2026-03-28 00:00:07.799326 | Job console starting
2026-03-28 00:00:07.818005 | Updating git repos
2026-03-28 00:00:08.220299 | Cloning repos into workspace
2026-03-28 00:00:08.476275 | Restoring repo states
2026-03-28 00:00:08.505644 | Merging changes
2026-03-28 00:00:08.505664 | Checking out repos
2026-03-28 00:00:08.933881 | Preparing playbooks
2026-03-28 00:00:10.065618 | Running Ansible setup
2026-03-28 00:00:17.274515 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-28 00:00:18.642459 |
2026-03-28 00:00:18.642577 | PLAY [Base pre]
2026-03-28 00:00:18.665352 |
2026-03-28 00:00:18.665485 | TASK [Setup log path fact]
2026-03-28 00:00:18.716711 | orchestrator | ok
2026-03-28 00:00:18.778879 |
2026-03-28 00:00:18.779005 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-28 00:00:18.841316 | orchestrator | ok
2026-03-28 00:00:18.857593 |
2026-03-28 00:00:18.857696 | TASK [emit-job-header : Print job information]
2026-03-28 00:00:18.978126 | # Job Information
2026-03-28 00:00:18.978300 | Ansible Version: 2.16.14
2026-03-28 00:00:18.978332 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-28 00:00:18.978359 | Pipeline: periodic-midnight
2026-03-28 00:00:18.978378 | Executor: 521e9411259a
2026-03-28 00:00:18.978394 | Triggered by: https://github.com/osism/testbed
2026-03-28 00:00:18.978411 | Event ID: 7d11dc1fbab545418744be3ecae96668
2026-03-28 00:00:18.986380 |
2026-03-28 00:00:18.986476 | LOOP [emit-job-header : Print node information]
2026-03-28 00:00:19.215128 | orchestrator | ok:
2026-03-28 00:00:19.215299 | orchestrator | # Node Information
2026-03-28 00:00:19.215329 | orchestrator | Inventory Hostname: orchestrator
2026-03-28 00:00:19.215349 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-28 00:00:19.215366 | orchestrator | Username: zuul-testbed04
2026-03-28 00:00:19.215383 | orchestrator | Distro: Debian 12.13
2026-03-28 00:00:19.215402 | orchestrator | Provider: static-testbed
2026-03-28 00:00:19.215468 | orchestrator | Region:
2026-03-28 00:00:19.215492 | orchestrator | Label: testbed-orchestrator
2026-03-28 00:00:19.215509 | orchestrator | Product Name: OpenStack Nova
2026-03-28 00:00:19.215525 | orchestrator | Interface IP: 81.163.193.140
2026-03-28 00:00:19.228923 |
2026-03-28 00:00:19.229017 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-28 00:00:20.765586 | orchestrator -> localhost | changed
2026-03-28 00:00:20.772003 |
2026-03-28 00:00:20.772091 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-28 00:00:22.615743 | orchestrator -> localhost | changed
2026-03-28 00:00:22.640300 |
2026-03-28 00:00:22.640413 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-28 00:00:23.459650 | orchestrator -> localhost | ok
2026-03-28 00:00:23.467266 |
2026-03-28 00:00:23.467376 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-28 00:00:23.528581 | orchestrator | ok
2026-03-28 00:00:23.614918 | orchestrator | included: /var/lib/zuul/builds/dbc4c42d8cae461abd33bd0788dfae71/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-28 00:00:23.676714 |
2026-03-28 00:00:23.676851 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-28 00:00:27.269515 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-28 00:00:27.270346 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/dbc4c42d8cae461abd33bd0788dfae71/work/dbc4c42d8cae461abd33bd0788dfae71_id_rsa
2026-03-28 00:00:27.270404 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/dbc4c42d8cae461abd33bd0788dfae71/work/dbc4c42d8cae461abd33bd0788dfae71_id_rsa.pub
2026-03-28 00:00:27.270428 | orchestrator -> localhost | The key fingerprint is:
2026-03-28 00:00:27.270449 | orchestrator -> localhost | SHA256:Y/gclNzhNj1z6hmPkt+RrLXJC4knKoP37Atzy7Hz+rY zuul-build-sshkey
2026-03-28 00:00:27.270468 | orchestrator -> localhost | The key's randomart image is:
2026-03-28 00:00:27.270495 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-28 00:00:27.270513 | orchestrator -> localhost | | . |
2026-03-28 00:00:27.270531 | orchestrator -> localhost | | . + o |
2026-03-28 00:00:27.270548 | orchestrator -> localhost | | + = + . |
2026-03-28 00:00:27.270564 | orchestrator -> localhost | | o . . = |
2026-03-28 00:00:27.270581 | orchestrator -> localhost | | . S o |
2026-03-28 00:00:27.270605 | orchestrator -> localhost | | + o + B . |
2026-03-28 00:00:27.270623 | orchestrator -> localhost | | .o = = B * |
2026-03-28 00:00:27.270639 | orchestrator -> localhost | | . +*.=.= * + |
2026-03-28 00:00:27.270656 | orchestrator -> localhost | | . =@BE.o *. |
2026-03-28 00:00:27.270672 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-28 00:00:27.270723 | orchestrator -> localhost | ok: Runtime: 0:00:01.782899
2026-03-28 00:00:27.279387 |
2026-03-28 00:00:27.279467 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-28 00:00:27.349997 | orchestrator | ok
2026-03-28 00:00:27.402594 | orchestrator | included: /var/lib/zuul/builds/dbc4c42d8cae461abd33bd0788dfae71/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-28 00:00:27.417477 |
2026-03-28 00:00:27.417564 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-28 00:00:27.440377 | orchestrator | skipping: Conditional result was False
2026-03-28 00:00:27.447542 |
2026-03-28 00:00:27.447624 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-28 00:00:28.511161 | orchestrator | changed
2026-03-28 00:00:28.516347 |
2026-03-28 00:00:28.516436 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-28 00:00:28.839071 | orchestrator | ok
2026-03-28 00:00:28.845111 |
2026-03-28 00:00:28.845212 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-28 00:00:29.326910 | orchestrator | ok
2026-03-28 00:00:29.332929 |
2026-03-28 00:00:29.333014 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-28 00:00:29.753542 | orchestrator | ok
2026-03-28 00:00:29.759411 |
2026-03-28 00:00:29.759493 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-28 00:00:29.816464 | orchestrator | skipping: Conditional result was False
2026-03-28 00:00:29.823000 |
2026-03-28 00:00:29.823092 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-28 00:00:30.817079 | orchestrator -> localhost | changed
2026-03-28 00:00:30.829948 |
2026-03-28 00:00:30.830043 | TASK [add-build-sshkey : Add back temp key]
2026-03-28 00:00:31.594064 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/dbc4c42d8cae461abd33bd0788dfae71/work/dbc4c42d8cae461abd33bd0788dfae71_id_rsa (zuul-build-sshkey)
2026-03-28 00:00:31.594325 | orchestrator -> localhost | ok: Runtime: 0:00:00.036394
2026-03-28 00:00:31.600264 |
2026-03-28 00:00:31.600349 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-28 00:00:32.095557 | orchestrator | ok
2026-03-28 00:00:32.102475 |
2026-03-28 00:00:32.102569 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-28 00:00:32.157502 | orchestrator | skipping: Conditional result was False
2026-03-28 00:00:32.233689 |
2026-03-28 00:00:32.233782 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-28 00:00:32.760392 | orchestrator | ok
2026-03-28 00:00:32.776537 |
2026-03-28 00:00:32.776639 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-28 00:00:32.818581 | orchestrator | ok
2026-03-28 00:00:32.826992 |
2026-03-28 00:00:32.827104 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-28 00:00:33.702688 | orchestrator -> localhost | ok
2026-03-28 00:00:33.708895 |
2026-03-28 00:00:33.708985 | TASK [validate-host : Collect information about the host]
2026-03-28 00:00:35.800290 | orchestrator | ok
2026-03-28 00:00:35.829753 |
2026-03-28 00:00:35.829860 | TASK [validate-host : Sanitize hostname]
2026-03-28 00:00:35.993878 | orchestrator | ok
2026-03-28 00:00:35.998273 |
2026-03-28 00:00:35.998375 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-28 00:00:37.739663 | orchestrator -> localhost | changed
2026-03-28 00:00:37.747836 |
2026-03-28 00:00:37.747928 | TASK [validate-host : Collect information about zuul worker]
2026-03-28 00:00:38.455784 | orchestrator | ok
2026-03-28 00:00:38.468187 |
2026-03-28 00:00:38.468309 | TASK [validate-host : Write out all zuul information for each host]
2026-03-28 00:00:40.259491 | orchestrator -> localhost | changed
2026-03-28 00:00:40.273241 |
2026-03-28 00:00:40.273332 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-28 00:00:40.581302 | orchestrator | ok
2026-03-28 00:00:40.586257 |
2026-03-28 00:00:40.586339 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-28 00:01:58.188667 | orchestrator | changed:
2026-03-28 00:01:58.188893 | orchestrator | .d..t...... src/
2026-03-28 00:01:58.188928 | orchestrator | .d..t...... src/github.com/
2026-03-28 00:01:58.188952 | orchestrator | .d..t...... src/github.com/osism/
2026-03-28 00:01:58.188974 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-28 00:01:58.188994 | orchestrator | RedHat.yml
2026-03-28 00:01:58.209472 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-28 00:01:58.209490 | orchestrator | RedHat.yml
2026-03-28 00:01:58.209544 | orchestrator | = 1.53.0"...
2026-03-28 00:02:09.004062 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-28 00:02:09.023485 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-28 00:02:09.175563 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-28 00:02:09.914047 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-28 00:02:09.984148 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-28 00:02:10.491891 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-28 00:02:10.560920 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-28 00:02:13.470118 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-28 00:02:13.470176 | orchestrator |
2026-03-28 00:02:13.470182 | orchestrator | Providers are signed by their developers.
2026-03-28 00:02:13.470188 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-28 00:02:13.470192 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-28 00:02:13.470217 | orchestrator |
2026-03-28 00:02:13.470222 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-28 00:02:13.470226 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-28 00:02:13.471141 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-28 00:02:13.471150 | orchestrator | you run "tofu init" in the future.
2026-03-28 00:02:13.471157 | orchestrator |
2026-03-28 00:02:13.471161 | orchestrator | OpenTofu has been successfully initialized!
2026-03-28 00:02:13.471165 | orchestrator |
2026-03-28 00:02:13.471169 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-28 00:02:13.471173 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-28 00:02:13.471177 | orchestrator | should now work.
2026-03-28 00:02:13.471181 | orchestrator |
2026-03-28 00:02:13.471185 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-28 00:02:13.471189 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-28 00:02:13.471193 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-28 00:02:13.636969 | orchestrator | Created and switched to workspace "ci"!
2026-03-28 00:02:13.637024 | orchestrator |
2026-03-28 00:02:13.637030 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-28 00:02:13.637036 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-28 00:02:13.637040 | orchestrator | for this configuration.
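The `tofu init` output above records the provider version constraints the testbed configuration declared and the versions the lock file pinned (openstack v3.4.0, local v2.7.0, null v3.2.4). A minimal sketch of a `required_providers` block that would drive this kind of selection is shown below; the constraints `">= 2.2.0"` for `hashicorp/local` and the `1.53.0` fragment for the openstack provider are taken from the log, while the exact block layout is an assumption, not the testbed's actual `.tf` source:

```hcl
terraform {
  required_providers {
    # resolved to terraform-provider-openstack/openstack v3.4.0 in this run
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    # resolved to hashicorp/local v2.7.0 in this run
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # no constraint in the log ("Finding latest version of hashicorp/null...")
    null = {
      source = "hashicorp/null"
    }
  }
}
```

The subsequent "Created and switched to workspace \"ci\"!" message is what `tofu workspace new ci` prints, which matches the `.ci`-suffixed file names that appear in the plan below.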
2026-03-28 00:02:13.756552 | orchestrator | ci.auto.tfvars
2026-03-28 00:02:13.777176 | orchestrator | default_custom.tf
2026-03-28 00:02:15.079774 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-28 00:02:21.194175 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 6s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-28 00:02:21.698955 | orchestrator |
2026-03-28 00:02:21.699023 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-28 00:02:21.699032 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-28 00:02:21.699037 | orchestrator | + create
2026-03-28 00:02:21.699042 | orchestrator | <= read (data resources)
2026-03-28 00:02:21.699046 | orchestrator |
2026-03-28 00:02:21.699050 | orchestrator | OpenTofu will perform the following actions:
2026-03-28 00:02:21.699062 | orchestrator |
2026-03-28 00:02:21.699066 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-28 00:02:21.699070 | orchestrator | # (config refers to values not yet known)
2026-03-28 00:02:21.699074 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-28 00:02:21.699078 | orchestrator | + checksum = (known after apply)
2026-03-28 00:02:21.699083 | orchestrator | + created_at = (known after apply)
2026-03-28 00:02:21.699087 | orchestrator | + file = (known after apply)
2026-03-28 00:02:21.699091 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699111 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699115 | orchestrator | + min_disk_gb = (known after apply)
2026-03-28 00:02:21.699119 | orchestrator | + min_ram_mb = (known after apply)
2026-03-28 00:02:21.699123 | orchestrator | + most_recent = true
2026-03-28 00:02:21.699127 | orchestrator | + name = (known after apply)
2026-03-28 00:02:21.699131 | orchestrator | + protected = (known after apply)
2026-03-28 00:02:21.699135 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699141 | orchestrator | + schema = (known after apply)
2026-03-28 00:02:21.699145 | orchestrator | + size_bytes = (known after apply)
2026-03-28 00:02:21.699149 | orchestrator | + tags = (known after apply)
2026-03-28 00:02:21.699152 | orchestrator | + updated_at = (known after apply)
2026-03-28 00:02:21.699156 | orchestrator | }
2026-03-28 00:02:21.699162 | orchestrator |
2026-03-28 00:02:21.699166 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-28 00:02:21.699170 | orchestrator | # (config refers to values not yet known)
2026-03-28 00:02:21.699174 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-28 00:02:21.699178 | orchestrator | + checksum = (known after apply)
2026-03-28 00:02:21.699182 | orchestrator | + created_at = (known after apply)
2026-03-28 00:02:21.699185 | orchestrator | + file = (known after apply)
2026-03-28 00:02:21.699189 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699192 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699196 | orchestrator | + min_disk_gb = (known after apply)
2026-03-28 00:02:21.699200 | orchestrator | + min_ram_mb = (known after apply)
2026-03-28 00:02:21.699203 | orchestrator | + most_recent = true
2026-03-28 00:02:21.699207 | orchestrator | + name = (known after apply)
2026-03-28 00:02:21.699211 | orchestrator | + protected = (known after apply)
2026-03-28 00:02:21.699215 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699218 | orchestrator | + schema = (known after apply)
2026-03-28 00:02:21.699222 | orchestrator | + size_bytes = (known after apply)
2026-03-28 00:02:21.699226 | orchestrator | + tags = (known after apply)
2026-03-28 00:02:21.699229 | orchestrator | + updated_at = (known after apply)
2026-03-28 00:02:21.699233 | orchestrator | }
2026-03-28 00:02:21.699237 | orchestrator |
2026-03-28 00:02:21.699241 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-28 00:02:21.699244 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-28 00:02:21.699248 | orchestrator | + content = (known after apply)
2026-03-28 00:02:21.699252 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:21.699256 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:21.699260 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:21.699264 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:21.699267 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:21.699271 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:21.699275 | orchestrator | + directory_permission = "0777"
2026-03-28 00:02:21.699278 | orchestrator | + file_permission = "0644"
2026-03-28 00:02:21.699282 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-28 00:02:21.699286 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699289 | orchestrator | }
2026-03-28 00:02:21.699295 | orchestrator |
2026-03-28 00:02:21.699298 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-28 00:02:21.699302 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-28 00:02:21.699306 | orchestrator | + content = (known after apply)
2026-03-28 00:02:21.699310 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:21.699313 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:21.699317 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:21.699321 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:21.699324 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:21.699328 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:21.699332 | orchestrator | + directory_permission = "0777"
2026-03-28 00:02:21.699336 | orchestrator | + file_permission = "0644"
2026-03-28 00:02:21.699343 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-28 00:02:21.699347 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699350 | orchestrator | }
2026-03-28 00:02:21.699354 | orchestrator |
2026-03-28 00:02:21.699364 | orchestrator | # local_file.inventory will be created
2026-03-28 00:02:21.699381 | orchestrator | + resource "local_file" "inventory" {
2026-03-28 00:02:21.699386 | orchestrator | + content = (known after apply)
2026-03-28 00:02:21.699390 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:21.699393 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:21.699397 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:21.699401 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:21.699405 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:21.699408 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:21.699412 | orchestrator | + directory_permission = "0777"
2026-03-28 00:02:21.699416 | orchestrator | + file_permission = "0644"
2026-03-28 00:02:21.699419 | orchestrator | + filename = "inventory.ci"
2026-03-28 00:02:21.699423 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699427 | orchestrator | }
2026-03-28 00:02:21.699432 | orchestrator |
2026-03-28 00:02:21.699436 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-28 00:02:21.699440 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-28 00:02:21.699444 | orchestrator | + content = (sensitive value)
2026-03-28 00:02:21.699447 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-28 00:02:21.699451 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-28 00:02:21.699455 | orchestrator | + content_md5 = (known after apply)
2026-03-28 00:02:21.699458 | orchestrator | + content_sha1 = (known after apply)
2026-03-28 00:02:21.699462 | orchestrator | + content_sha256 = (known after apply)
2026-03-28 00:02:21.699466 | orchestrator | + content_sha512 = (known after apply)
2026-03-28 00:02:21.699469 | orchestrator | + directory_permission = "0700"
2026-03-28 00:02:21.699473 | orchestrator | + file_permission = "0600"
2026-03-28 00:02:21.699477 | orchestrator | + filename = ".id_rsa.ci"
2026-03-28 00:02:21.699480 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699484 | orchestrator | }
2026-03-28 00:02:21.699488 | orchestrator |
2026-03-28 00:02:21.699491 | orchestrator | # null_resource.node_semaphore will be created
2026-03-28 00:02:21.699495 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-28 00:02:21.699499 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699502 | orchestrator | }
2026-03-28 00:02:21.699506 | orchestrator |
2026-03-28 00:02:21.699510 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-28 00:02:21.699514 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-28 00:02:21.699518 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.699521 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.699525 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699529 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:21.699532 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699536 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-28 00:02:21.699540 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699544 | orchestrator | + size = 80
2026-03-28 00:02:21.699547 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.699551 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.699555 | orchestrator | }
2026-03-28 00:02:21.699560 | orchestrator |
2026-03-28 00:02:21.699564 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-28 00:02:21.699568 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:21.699571 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.699575 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.699579 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699586 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:21.699590 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699594 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-28 00:02:21.699597 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699601 | orchestrator | + size = 80
2026-03-28 00:02:21.699605 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.699608 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.699612 | orchestrator | }
2026-03-28 00:02:21.699616 | orchestrator |
2026-03-28 00:02:21.699620 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-28 00:02:21.699623 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:21.699627 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.699631 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.699634 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699638 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:21.699642 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699646 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-28 00:02:21.699649 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699653 | orchestrator | + size = 80
2026-03-28 00:02:21.699657 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.699661 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.699664 | orchestrator | }
2026-03-28 00:02:21.699668 | orchestrator |
2026-03-28 00:02:21.699672 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-28 00:02:21.699675 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:21.699679 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.699683 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.699686 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699690 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:21.699694 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699697 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-28 00:02:21.699701 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699704 | orchestrator | + size = 80
2026-03-28 00:02:21.699708 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.699712 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.699716 | orchestrator | }
2026-03-28 00:02:21.699721 | orchestrator |
2026-03-28 00:02:21.699725 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-28 00:02:21.699728 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:21.699732 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.699736 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.699739 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699743 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:21.699747 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699753 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-28 00:02:21.699756 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699760 | orchestrator | + size = 80
2026-03-28 00:02:21.699764 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.699767 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.699771 | orchestrator | }
2026-03-28 00:02:21.699775 | orchestrator |
2026-03-28 00:02:21.699778 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-28 00:02:21.699782 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:21.699786 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.699790 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.699793 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699805 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:21.699808 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699812 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-28 00:02:21.699816 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699819 | orchestrator | + size = 80
2026-03-28 00:02:21.699823 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.699827 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.699831 | orchestrator | }
2026-03-28 00:02:21.699834 | orchestrator |
2026-03-28 00:02:21.699838 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-28 00:02:21.699842 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-28 00:02:21.699845 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.699849 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.699853 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699856 | orchestrator | + image_id = (known after apply)
2026-03-28 00:02:21.699860 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699864 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-28 00:02:21.699867 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699871 | orchestrator | + size = 80
2026-03-28 00:02:21.699875 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.699878 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.699883 | orchestrator | }
2026-03-28 00:02:21.699891 | orchestrator |
2026-03-28 00:02:21.699896 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-28 00:02:21.699903 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:21.699909 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.699914 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.699920 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699926 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699932 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-28 00:02:21.699938 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699944 | orchestrator | + size = 20
2026-03-28 00:02:21.699950 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.699956 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.699962 | orchestrator | }
2026-03-28 00:02:21.699966 | orchestrator |
2026-03-28 00:02:21.699969 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-28 00:02:21.699973 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:21.699977 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.699980 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.699984 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.699988 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.699991 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-28 00:02:21.699995 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.699999 | orchestrator | + size = 20
2026-03-28 00:02:21.700002 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.700006 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.700010 | orchestrator | }
2026-03-28 00:02:21.700013 | orchestrator |
2026-03-28 00:02:21.700017 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-28 00:02:21.700021 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:21.700025 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.700028 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.700032 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.700036 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.700039 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-28 00:02:21.700043 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.700050 | orchestrator | + size = 20
2026-03-28 00:02:21.700054 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.700057 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.700061 | orchestrator | }
2026-03-28 00:02:21.700065 | orchestrator |
2026-03-28 00:02:21.700068 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-28 00:02:21.700072 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:21.700076 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.700079 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.700083 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.700087 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.700090 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-28 00:02:21.700094 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.700098 | orchestrator | + size = 20
2026-03-28 00:02:21.700101 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.700105 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.700108 | orchestrator | }
2026-03-28 00:02:21.700114 | orchestrator |
2026-03-28 00:02:21.700118 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-28 00:02:21.700122 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:21.700125 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.700129 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.700132 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.700136 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.700140 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-28 00:02:21.700143 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.700150 | orchestrator | + size = 20
2026-03-28 00:02:21.700154 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.700157 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.700161 | orchestrator | }
2026-03-28 00:02:21.700165 | orchestrator |
2026-03-28 00:02:21.700168 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-28 00:02:21.700172 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:21.700176 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.700179 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.700183 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.700187 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.700190 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-28 00:02:21.700194 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.700197 | orchestrator | + size = 20
2026-03-28 00:02:21.700201 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.700205 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.700208 | orchestrator | }
2026-03-28 00:02:21.700212 | orchestrator |
2026-03-28 00:02:21.700216 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-28 00:02:21.700219 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:21.700223 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.700227 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.700230 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.700234 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.700238 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-28 00:02:21.700241 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.700245 | orchestrator | + size = 20
2026-03-28 00:02:21.700249 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.700252 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.700256 | orchestrator | }
2026-03-28 00:02:21.700260 | orchestrator |
2026-03-28 00:02:21.700263 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-28 00:02:21.700267 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-28 00:02:21.700274 | orchestrator | + attachment = (known after apply)
2026-03-28 00:02:21.700278 | orchestrator | + availability_zone = "nova"
2026-03-28 00:02:21.700281 | orchestrator | + id = (known after apply)
2026-03-28 00:02:21.700285 | orchestrator | + metadata = (known after apply)
2026-03-28 00:02:21.700289 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-28 00:02:21.700292 | orchestrator | + region = (known after apply)
2026-03-28 00:02:21.700296 | orchestrator | + size = 20
2026-03-28 00:02:21.700300 | orchestrator | + volume_retype_policy = "never"
2026-03-28 00:02:21.700303 | orchestrator | + volume_type = "ssd"
2026-03-28 00:02:21.700307 | orchestrator | }
2026-03-28 00:02:21.700311 | orchestrator |
2026-03-28 00:02:21.700314 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-28 00:02:21.700318 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-28 00:02:21.700322 | orchestrator | + attachment = (known after apply) 2026-03-28 00:02:21.700325 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:21.700329 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.700333 | orchestrator | + metadata = (known after apply) 2026-03-28 00:02:21.700337 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-28 00:02:21.700340 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.700344 | orchestrator | + size = 20 2026-03-28 00:02:21.700348 | orchestrator | + volume_retype_policy = "never" 2026-03-28 00:02:21.700351 | orchestrator | + volume_type = "ssd" 2026-03-28 00:02:21.700355 | orchestrator | } 2026-03-28 00:02:21.700360 | orchestrator | 2026-03-28 00:02:21.700364 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-28 00:02:21.700383 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-28 00:02:21.700387 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:21.700390 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:21.700394 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:21.700398 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:21.700402 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:21.700405 | orchestrator | + config_drive = true 2026-03-28 00:02:21.700409 | orchestrator | + created = (known after apply) 2026-03-28 00:02:21.700413 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:21.700416 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-28 00:02:21.700420 | orchestrator | + force_delete = false 2026-03-28 00:02:21.700424 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:21.700427 | 
orchestrator | + id = (known after apply) 2026-03-28 00:02:21.700431 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:21.700434 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:21.700438 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:21.700442 | orchestrator | + name = "testbed-manager" 2026-03-28 00:02:21.700445 | orchestrator | + power_state = "active" 2026-03-28 00:02:21.700449 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.700453 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:21.700456 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:21.700460 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:21.700464 | orchestrator | + user_data = (sensitive value) 2026-03-28 00:02:21.700467 | orchestrator | 2026-03-28 00:02:21.700471 | orchestrator | + block_device { 2026-03-28 00:02:21.700475 | orchestrator | + boot_index = 0 2026-03-28 00:02:21.700479 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:21.700485 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:21.700489 | orchestrator | + multiattach = false 2026-03-28 00:02:21.700493 | orchestrator | + source_type = "volume" 2026-03-28 00:02:21.700496 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.700503 | orchestrator | } 2026-03-28 00:02:21.700507 | orchestrator | 2026-03-28 00:02:21.700511 | orchestrator | + network { 2026-03-28 00:02:21.700514 | orchestrator | + access_network = false 2026-03-28 00:02:21.700518 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:21.700522 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:21.700525 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:21.700529 | orchestrator | + name = (known after apply) 2026-03-28 00:02:21.700533 | orchestrator | + port = (known after apply) 2026-03-28 00:02:21.700536 | orchestrator | + uuid = (known after apply) 2026-03-28 
00:02:21.700540 | orchestrator | } 2026-03-28 00:02:21.700544 | orchestrator | } 2026-03-28 00:02:21.700549 | orchestrator | 2026-03-28 00:02:21.700553 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-28 00:02:21.700557 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:21.700560 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:21.700564 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:21.700568 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:21.700571 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:21.700575 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:21.700579 | orchestrator | + config_drive = true 2026-03-28 00:02:21.700582 | orchestrator | + created = (known after apply) 2026-03-28 00:02:21.700586 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:21.700590 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:21.700593 | orchestrator | + force_delete = false 2026-03-28 00:02:21.700597 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:21.700601 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.700605 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:21.700608 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:21.700612 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:21.700616 | orchestrator | + name = "testbed-node-0" 2026-03-28 00:02:21.700619 | orchestrator | + power_state = "active" 2026-03-28 00:02:21.700623 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.700627 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:21.700630 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:21.700634 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:21.700638 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:21.700641 | orchestrator | 2026-03-28 00:02:21.700645 | orchestrator | + block_device { 2026-03-28 00:02:21.700649 | orchestrator | + boot_index = 0 2026-03-28 00:02:21.700652 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:21.700656 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:21.700660 | orchestrator | + multiattach = false 2026-03-28 00:02:21.700663 | orchestrator | + source_type = "volume" 2026-03-28 00:02:21.700667 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.700671 | orchestrator | } 2026-03-28 00:02:21.700675 | orchestrator | 2026-03-28 00:02:21.700678 | orchestrator | + network { 2026-03-28 00:02:21.700682 | orchestrator | + access_network = false 2026-03-28 00:02:21.700686 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:21.700689 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:21.700693 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:21.700697 | orchestrator | + name = (known after apply) 2026-03-28 00:02:21.700700 | orchestrator | + port = (known after apply) 2026-03-28 00:02:21.700704 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.700708 | orchestrator | } 2026-03-28 00:02:21.700712 | orchestrator | } 2026-03-28 00:02:21.700717 | orchestrator | 2026-03-28 00:02:21.700721 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-28 00:02:21.700724 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:21.700728 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:21.700746 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:21.700750 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:21.700754 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:21.700757 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:21.700761 
| orchestrator | + config_drive = true 2026-03-28 00:02:21.700765 | orchestrator | + created = (known after apply) 2026-03-28 00:02:21.700768 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:21.700772 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:21.700776 | orchestrator | + force_delete = false 2026-03-28 00:02:21.700779 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:21.700783 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.700787 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:21.700790 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:21.700794 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:21.700797 | orchestrator | + name = "testbed-node-1" 2026-03-28 00:02:21.700801 | orchestrator | + power_state = "active" 2026-03-28 00:02:21.700805 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.700808 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:21.700812 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:21.700816 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:21.700820 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:21.700823 | orchestrator | 2026-03-28 00:02:21.700827 | orchestrator | + block_device { 2026-03-28 00:02:21.700831 | orchestrator | + boot_index = 0 2026-03-28 00:02:21.700834 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:21.700838 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:21.700842 | orchestrator | + multiattach = false 2026-03-28 00:02:21.700845 | orchestrator | + source_type = "volume" 2026-03-28 00:02:21.700849 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.700853 | orchestrator | } 2026-03-28 00:02:21.700856 | orchestrator | 2026-03-28 00:02:21.700860 | orchestrator | + network { 2026-03-28 00:02:21.700864 | orchestrator | + access_network = 
false 2026-03-28 00:02:21.700868 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:21.700871 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:21.700875 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:21.700878 | orchestrator | + name = (known after apply) 2026-03-28 00:02:21.700882 | orchestrator | + port = (known after apply) 2026-03-28 00:02:21.700886 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.700890 | orchestrator | } 2026-03-28 00:02:21.700893 | orchestrator | } 2026-03-28 00:02:21.700899 | orchestrator | 2026-03-28 00:02:21.700902 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-28 00:02:21.700906 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:21.700910 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:21.700913 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:21.700917 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:21.700921 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:21.700927 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:21.700931 | orchestrator | + config_drive = true 2026-03-28 00:02:21.700935 | orchestrator | + created = (known after apply) 2026-03-28 00:02:21.700938 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:21.700942 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:21.700946 | orchestrator | + force_delete = false 2026-03-28 00:02:21.700949 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:21.700953 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.700956 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:21.700964 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:21.700967 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:21.700971 | orchestrator | + name = 
"testbed-node-2" 2026-03-28 00:02:21.700975 | orchestrator | + power_state = "active" 2026-03-28 00:02:21.700978 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.700982 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:21.700986 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:21.700989 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:21.700993 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:21.700997 | orchestrator | 2026-03-28 00:02:21.701000 | orchestrator | + block_device { 2026-03-28 00:02:21.701004 | orchestrator | + boot_index = 0 2026-03-28 00:02:21.701008 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:21.701011 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:21.701015 | orchestrator | + multiattach = false 2026-03-28 00:02:21.701019 | orchestrator | + source_type = "volume" 2026-03-28 00:02:21.701022 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.701026 | orchestrator | } 2026-03-28 00:02:21.701030 | orchestrator | 2026-03-28 00:02:21.701033 | orchestrator | + network { 2026-03-28 00:02:21.701037 | orchestrator | + access_network = false 2026-03-28 00:02:21.701041 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:21.701044 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:21.701048 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:21.701052 | orchestrator | + name = (known after apply) 2026-03-28 00:02:21.701055 | orchestrator | + port = (known after apply) 2026-03-28 00:02:21.701059 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.701063 | orchestrator | } 2026-03-28 00:02:21.701066 | orchestrator | } 2026-03-28 00:02:21.701072 | orchestrator | 2026-03-28 00:02:21.701076 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-28 00:02:21.701079 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:21.701083 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:21.701087 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:21.701090 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:21.701094 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:21.701098 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:21.701101 | orchestrator | + config_drive = true 2026-03-28 00:02:21.701105 | orchestrator | + created = (known after apply) 2026-03-28 00:02:21.701109 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:21.701112 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:21.701116 | orchestrator | + force_delete = false 2026-03-28 00:02:21.701119 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:21.701123 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.701127 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:21.701130 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:21.701134 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:21.701138 | orchestrator | + name = "testbed-node-3" 2026-03-28 00:02:21.701141 | orchestrator | + power_state = "active" 2026-03-28 00:02:21.701145 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.701149 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:21.701152 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:21.701156 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:21.701159 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:21.701163 | orchestrator | 2026-03-28 00:02:21.701167 | orchestrator | + block_device { 2026-03-28 00:02:21.701173 | orchestrator | + boot_index = 0 2026-03-28 00:02:21.701177 | orchestrator | + delete_on_termination = false 2026-03-28 
00:02:21.701181 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:21.701189 | orchestrator | + multiattach = false 2026-03-28 00:02:21.701193 | orchestrator | + source_type = "volume" 2026-03-28 00:02:21.701196 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.701200 | orchestrator | } 2026-03-28 00:02:21.701204 | orchestrator | 2026-03-28 00:02:21.701207 | orchestrator | + network { 2026-03-28 00:02:21.701211 | orchestrator | + access_network = false 2026-03-28 00:02:21.701215 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:21.701218 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:21.701222 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:21.701226 | orchestrator | + name = (known after apply) 2026-03-28 00:02:21.701229 | orchestrator | + port = (known after apply) 2026-03-28 00:02:21.701233 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.701237 | orchestrator | } 2026-03-28 00:02:21.701240 | orchestrator | } 2026-03-28 00:02:21.701246 | orchestrator | 2026-03-28 00:02:21.701250 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-28 00:02:21.701254 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:21.701257 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:21.701261 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:21.701265 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:21.701268 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:21.701272 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:21.701276 | orchestrator | + config_drive = true 2026-03-28 00:02:21.701279 | orchestrator | + created = (known after apply) 2026-03-28 00:02:21.701283 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:21.701287 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:21.701290 | 
orchestrator | + force_delete = false 2026-03-28 00:02:21.701294 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:21.701297 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.701301 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:21.701305 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:21.701308 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:21.701312 | orchestrator | + name = "testbed-node-4" 2026-03-28 00:02:21.701316 | orchestrator | + power_state = "active" 2026-03-28 00:02:21.701319 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.701323 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:21.701327 | orchestrator | + stop_before_destroy = false 2026-03-28 00:02:21.701330 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:21.701334 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:21.701338 | orchestrator | 2026-03-28 00:02:21.701341 | orchestrator | + block_device { 2026-03-28 00:02:21.701345 | orchestrator | + boot_index = 0 2026-03-28 00:02:21.701349 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:21.701352 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:21.701356 | orchestrator | + multiattach = false 2026-03-28 00:02:21.701360 | orchestrator | + source_type = "volume" 2026-03-28 00:02:21.701363 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.701367 | orchestrator | } 2026-03-28 00:02:21.701388 | orchestrator | 2026-03-28 00:02:21.701391 | orchestrator | + network { 2026-03-28 00:02:21.701395 | orchestrator | + access_network = false 2026-03-28 00:02:21.701399 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:21.701403 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:21.701406 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:21.701410 | orchestrator | + name = (known 
after apply) 2026-03-28 00:02:21.701414 | orchestrator | + port = (known after apply) 2026-03-28 00:02:21.701417 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.701421 | orchestrator | } 2026-03-28 00:02:21.701424 | orchestrator | } 2026-03-28 00:02:21.701597 | orchestrator | 2026-03-28 00:02:21.701677 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-28 00:02:21.701691 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-28 00:02:21.701701 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-28 00:02:21.701711 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-28 00:02:21.701720 | orchestrator | + all_metadata = (known after apply) 2026-03-28 00:02:21.701730 | orchestrator | + all_tags = (known after apply) 2026-03-28 00:02:21.701739 | orchestrator | + availability_zone = "nova" 2026-03-28 00:02:21.701749 | orchestrator | + config_drive = true 2026-03-28 00:02:21.701758 | orchestrator | + created = (known after apply) 2026-03-28 00:02:21.701767 | orchestrator | + flavor_id = (known after apply) 2026-03-28 00:02:21.701777 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-28 00:02:21.701786 | orchestrator | + force_delete = false 2026-03-28 00:02:21.701812 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-28 00:02:21.701822 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.701831 | orchestrator | + image_id = (known after apply) 2026-03-28 00:02:21.701840 | orchestrator | + image_name = (known after apply) 2026-03-28 00:02:21.701849 | orchestrator | + key_pair = "testbed" 2026-03-28 00:02:21.701859 | orchestrator | + name = "testbed-node-5" 2026-03-28 00:02:21.701868 | orchestrator | + power_state = "active" 2026-03-28 00:02:21.701877 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.701886 | orchestrator | + security_groups = (known after apply) 2026-03-28 00:02:21.701895 | orchestrator | + 
stop_before_destroy = false 2026-03-28 00:02:21.701905 | orchestrator | + updated = (known after apply) 2026-03-28 00:02:21.701914 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-28 00:02:21.701924 | orchestrator | 2026-03-28 00:02:21.701933 | orchestrator | + block_device { 2026-03-28 00:02:21.701943 | orchestrator | + boot_index = 0 2026-03-28 00:02:21.701953 | orchestrator | + delete_on_termination = false 2026-03-28 00:02:21.701962 | orchestrator | + destination_type = "volume" 2026-03-28 00:02:21.701971 | orchestrator | + multiattach = false 2026-03-28 00:02:21.701980 | orchestrator | + source_type = "volume" 2026-03-28 00:02:21.701989 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.701999 | orchestrator | } 2026-03-28 00:02:21.702008 | orchestrator | 2026-03-28 00:02:21.702051 | orchestrator | + network { 2026-03-28 00:02:21.702063 | orchestrator | + access_network = false 2026-03-28 00:02:21.702072 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-28 00:02:21.702081 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-28 00:02:21.702091 | orchestrator | + mac = (known after apply) 2026-03-28 00:02:21.702101 | orchestrator | + name = (known after apply) 2026-03-28 00:02:21.702110 | orchestrator | + port = (known after apply) 2026-03-28 00:02:21.702119 | orchestrator | + uuid = (known after apply) 2026-03-28 00:02:21.702128 | orchestrator | } 2026-03-28 00:02:21.702138 | orchestrator | } 2026-03-28 00:02:21.702148 | orchestrator | 2026-03-28 00:02:21.702157 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-28 00:02:21.702167 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-28 00:02:21.702176 | orchestrator | + fingerprint = (known after apply) 2026-03-28 00:02:21.702185 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.702195 | orchestrator | + name = "testbed" 2026-03-28 00:02:21.702204 | orchestrator | + private_key = 
(sensitive value) 2026-03-28 00:02:21.702213 | orchestrator | + public_key = (known after apply) 2026-03-28 00:02:21.702222 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.702232 | orchestrator | + user_id = (known after apply) 2026-03-28 00:02:21.702241 | orchestrator | } 2026-03-28 00:02:21.702250 | orchestrator | 2026-03-28 00:02:21.702260 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-28 00:02:21.702269 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-28 00:02:21.702296 | orchestrator | + device = (known after apply) 2026-03-28 00:02:21.702306 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.702315 | orchestrator | + instance_id = (known after apply) 2026-03-28 00:02:21.702324 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.702334 | orchestrator | + volume_id = (known after apply) 2026-03-28 00:02:21.702343 | orchestrator | } 2026-03-28 00:02:21.702367 | orchestrator | 2026-03-28 00:02:21.702415 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-28 00:02:21.702431 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-28 00:02:21.702448 | orchestrator | + device = (known after apply) 2026-03-28 00:02:21.702464 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.702478 | orchestrator | + instance_id = (known after apply) 2026-03-28 00:02:21.702487 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.702497 | orchestrator | + volume_id = (known after apply) 2026-03-28 00:02:21.702506 | orchestrator | } 2026-03-28 00:02:21.702516 | orchestrator | 2026-03-28 00:02:21.702526 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-28 00:02:21.702535 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-28 00:02:21.707995 | orchestrator | + network_id = (known after apply) 2026-03-28 00:02:21.708003 | orchestrator | + no_gateway = false 2026-03-28 00:02:21.708011 | orchestrator | + region = (known after apply) 2026-03-28 00:02:21.708018 | orchestrator | + service_types = (known after apply) 2026-03-28 00:02:21.708032 | orchestrator | + tenant_id = (known after apply) 2026-03-28 00:02:21.708039 | orchestrator | 2026-03-28 00:02:21.708047 | orchestrator | + allocation_pool { 2026-03-28 00:02:21.708055 | orchestrator | + end = "192.168.31.250" 2026-03-28 00:02:21.708063 | orchestrator | + start = "192.168.31.200" 2026-03-28 00:02:21.708070 | orchestrator | } 2026-03-28 00:02:21.708078 | orchestrator | } 2026-03-28 00:02:21.708086 | orchestrator | 2026-03-28 00:02:21.708094 | orchestrator | # terraform_data.image will be created 2026-03-28 00:02:21.708102 | orchestrator | + resource "terraform_data" "image" { 2026-03-28 00:02:21.708109 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.708117 | orchestrator | + input = "Ubuntu 24.04" 2026-03-28 00:02:21.708124 | orchestrator | + output = (known after apply) 2026-03-28 00:02:21.708132 | orchestrator | } 2026-03-28 00:02:21.708140 | orchestrator | 2026-03-28 00:02:21.708147 | orchestrator | # terraform_data.image_node will be created 2026-03-28 00:02:21.708156 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-28 00:02:21.708163 | orchestrator | + id = (known after apply) 2026-03-28 00:02:21.708171 | orchestrator | + input = "Ubuntu 24.04" 2026-03-28 00:02:21.708178 | orchestrator | + output = (known after apply) 2026-03-28 00:02:21.708186 | orchestrator | } 2026-03-28 00:02:21.708194 | orchestrator | 2026-03-28 00:02:21.708202 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-28 00:02:21.708209 | orchestrator | 2026-03-28 00:02:21.708217 | orchestrator | Changes to Outputs: 2026-03-28 00:02:21.708230 | orchestrator | + manager_address = (sensitive value) 2026-03-28 00:02:21.708238 | orchestrator | + private_key = (sensitive value) 2026-03-28 00:02:21.943324 | orchestrator | terraform_data.image: Creating... 2026-03-28 00:02:21.943434 | orchestrator | terraform_data.image_node: Creating... 2026-03-28 00:02:21.943447 | orchestrator | terraform_data.image: Creation complete after 0s [id=513a837d-bc19-65d9-0b1c-06d8f8e737d4] 2026-03-28 00:02:21.943465 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=6d1b3478-2d1b-51a4-9930-b360d5721176] 2026-03-28 00:02:21.960850 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-28 00:02:21.961473 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-28 00:02:21.972782 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-28 00:02:21.974945 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-28 00:02:21.988966 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-28 00:02:21.989471 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-28 00:02:21.990684 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-28 00:02:21.991069 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-28 00:02:21.991854 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-28 00:02:21.993631 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-28 00:02:22.475785 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-28 00:02:22.484217 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 
2026-03-28 00:02:22.505319 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-28 00:02:22.513996 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-28 00:02:22.759338 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-03-28 00:02:22.767541 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-03-28 00:02:23.222904 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=527862e5-7c33-481a-a273-09cd02ca3590] 2026-03-28 00:02:23.234198 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-28 00:02:25.872397 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=c165f4e4-c145-4cd5-8a4b-fe75c460abfb] 2026-03-28 00:02:25.878266 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=616f32f6-becb-4ce1-b615-c2a0fbaca869] 2026-03-28 00:02:25.879801 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-03-28 00:02:25.881002 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=78ac07d6-a998-431a-8632-f54c89645a8d] 2026-03-28 00:02:25.885732 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-03-28 00:02:25.886286 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-28 00:02:25.891608 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=479351df-b417-42ac-b9cb-d6683c731815] 2026-03-28 00:02:25.892825 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=af575ecf-0cf6-48aa-a1b6-43f16240ccad] 2026-03-28 00:02:25.898979 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
2026-03-28 00:02:25.899666 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-03-28 00:02:25.907304 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=d2c41d1e-c1aa-422a-bc56-ab0bbd118726] 2026-03-28 00:02:25.909076 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=0a0aea56-4050-4691-823a-d862fa48a59f] 2026-03-28 00:02:25.911166 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=3670b387-e30b-4544-bca5-74e83387707d] 2026-03-28 00:02:25.916491 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-28 00:02:25.924720 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-28 00:02:25.929579 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-28 00:02:25.934940 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=e820fd81a7bf10a027f0f0afe76f70b77f3c4962] 2026-03-28 00:02:25.936512 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=21fa5b0fbf53ef690aec9b0500ffedfd88cea0d8] 2026-03-28 00:02:25.939610 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-03-28 00:02:26.077644 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=edfefcfb-f0d2-43d0-b5b0-353b223cd811] 2026-03-28 00:02:26.680704 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=88a2f0c9-b73b-426a-b81f-312e09d7fc82] 2026-03-28 00:02:27.054675 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=38a7f7b5-ed97-4584-b7c9-38de4120b97c] 2026-03-28 00:02:27.064189 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-03-28 00:02:29.359591 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=9304b03c-54d0-4df2-b114-2d3d3345c945] 2026-03-28 00:02:29.359786 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=eb8cdf5a-61ca-4829-8f5a-ada391b02d40] 2026-03-28 00:02:29.415407 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=c7e5dcb9-1092-43de-8534-38467587340e] 2026-03-28 00:02:29.433970 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=11ea0833-972a-4340-8844-482a94e1775f] 2026-03-28 00:02:29.456630 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=3c51dbd4-3dd9-4220-b480-983204e78537] 2026-03-28 00:02:29.477461 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=5eeae212-4745-43bc-8401-ef4bd4af3314] 2026-03-28 00:02:31.117145 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=57000ac7-98a2-40aa-9739-9f0b3903ff5d] 2026-03-28 00:02:31.124774 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-03-28 00:02:31.125582 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-28 00:02:31.125801 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-28 00:02:31.952176 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=e1b1bd9b-8e74-4a48-a49a-ceeb84bc8275] 2026-03-28 00:02:31.972634 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-03-28 00:02:31.974856 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
2026-03-28 00:02:31.975634 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-03-28 00:02:31.975762 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-03-28 00:02:31.975973 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-28 00:02:31.976305 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-28 00:02:31.981063 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-28 00:02:31.981733 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-03-28 00:02:32.160429 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=9d49daf3-87b6-4ea0-9a11-23ca78635836] 2026-03-28 00:02:32.170316 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-03-28 00:02:32.256458 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=4c84f31f-9e07-4303-8aa5-3ad7f30a1f8b] 2026-03-28 00:02:32.269055 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-03-28 00:02:33.163960 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=247ef05b-ebf7-4b73-bd8d-b302c4227a84] 2026-03-28 00:02:33.170070 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-03-28 00:02:33.274577 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=d2981187-0f6a-4621-8927-8aae508b467f] 2026-03-28 00:02:33.288327 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 
2026-03-28 00:02:33.429091 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=59730a51-425a-406d-8e81-85dfb5fbdcb0] 2026-03-28 00:02:33.440763 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-28 00:02:33.507174 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=6ef56f71-0506-4edb-babc-dd9b7106c4ce] 2026-03-28 00:02:33.512951 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-03-28 00:02:33.516369 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=243a912b-60ce-47a8-8efe-4b26f5c3925b] 2026-03-28 00:02:33.522205 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-28 00:02:33.730559 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=e6391af9-60e0-4ddd-9db2-a28ae9b090cd] 2026-03-28 00:02:33.737583 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 
2026-03-28 00:02:33.863288 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=7497d248-9897-449c-889b-ce40aefb6059] 2026-03-28 00:02:33.915819 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=887afeba-cb4a-4926-b5a1-5cef64993f96] 2026-03-28 00:02:33.942474 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=8c3535d3-d852-4a39-b365-08c50b364a7b] 2026-03-28 00:02:33.967303 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=5d68bc86-0361-4016-b3b2-c942163fe1ae] 2026-03-28 00:02:34.206674 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=8e2ba7cc-9b77-4277-81a6-e005696c0829] 2026-03-28 00:02:34.399092 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=e2d84497-5947-45c1-b25b-8ad7b0ca9dd1] 2026-03-28 00:02:34.560657 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 3s [id=46d8b22f-ed1f-4ee4-87ae-3dd3cbe95fab] 2026-03-28 00:02:34.699473 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 3s [id=0b983fdc-a2ec-40f4-aebd-85d138c88b8e] 2026-03-28 00:02:34.790670 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=fda60524-add9-41c3-9f8e-c41b83444393] 2026-03-28 00:02:35.738083 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=d1c26f62-9c0a-4cd8-87d2-c7f7d855d709] 2026-03-28 00:02:35.764947 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-03-28 00:02:35.766178 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 
2026-03-28 00:02:35.770143 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-03-28 00:02:35.770501 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-28 00:02:35.796306 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-03-28 00:02:35.807433 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-28 00:02:35.817015 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-28 00:02:38.300130 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=95f5d68e-d89f-4565-9d5d-85d86bcaf55e] 2026-03-28 00:02:38.312524 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-28 00:02:38.313818 | orchestrator | local_file.inventory: Creating... 2026-03-28 00:02:38.314603 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-28 00:02:38.320603 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=04f3b2225aac618f70d9843f2ce064cdcbed1433] 2026-03-28 00:02:38.322847 | orchestrator | local_file.inventory: Creation complete after 0s [id=de616f10ed7534f9aabf0516685f1a0fe4178f67] 2026-03-28 00:02:39.965714 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=95f5d68e-d89f-4565-9d5d-85d86bcaf55e] 2026-03-28 00:02:45.769817 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-28 00:02:45.772106 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-28 00:02:45.775966 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-03-28 00:02:45.797802 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... 
[10s elapsed] 2026-03-28 00:02:45.810238 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-03-28 00:02:45.817444 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-03-28 00:02:55.770185 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-28 00:02:55.772339 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-03-28 00:02:55.776680 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-28 00:02:55.797932 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-28 00:02:55.811289 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-28 00:02:55.817560 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-28 00:03:05.777792 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-28 00:03:05.777932 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-03-28 00:03:05.777963 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-03-28 00:03:05.799156 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-03-28 00:03:05.811346 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-03-28 00:03:05.818593 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-03-28 00:03:15.785140 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-03-28 00:03:15.785235 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... 
[40s elapsed] 2026-03-28 00:03:15.785244 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-03-28 00:03:15.799459 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-03-28 00:03:15.811949 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-03-28 00:03:15.819179 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-03-28 00:03:17.208597 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=42f67e59-9cc7-4ba8-83d1-61ef5fbaecdf] 2026-03-28 00:03:17.334078 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=ed09c7e0-f455-47f3-9855-954aaf9ea827] 2026-03-28 00:03:25.785646 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed] 2026-03-28 00:03:25.785753 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed] 2026-03-28 00:03:25.813113 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed] 2026-03-28 00:03:25.819722 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed] 2026-03-28 00:03:26.604032 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=f18fcdbe-1ed8-43cd-9303-5456cf358cd6] 2026-03-28 00:03:27.158707 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 51s [id=785df76f-2e53-4811-81df-037448056317] 2026-03-28 00:03:27.246421 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=060eb738-d1a4-4076-87b8-3b3dadfa3170] 2026-03-28 00:03:35.813380 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[1m0s elapsed] 2026-03-28 00:03:37.065618 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m1s [id=6bf0edd8-ae9f-4b47-992a-5ef6e9ffb800] 2026-03-28 00:03:37.089499 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-28 00:03:37.092847 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=571782039306938096] 2026-03-28 00:03:37.099584 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-28 00:03:37.104613 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-28 00:03:37.108598 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-03-28 00:03:37.116892 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-28 00:03:37.117847 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-28 00:03:37.124810 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-28 00:03:37.139043 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-03-28 00:03:37.171318 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-28 00:03:37.178843 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-28 00:03:37.195421 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
2026-03-28 00:03:40.490170 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=6bf0edd8-ae9f-4b47-992a-5ef6e9ffb800/edfefcfb-f0d2-43d0-b5b0-353b223cd811] 2026-03-28 00:03:40.515433 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=42f67e59-9cc7-4ba8-83d1-61ef5fbaecdf/479351df-b417-42ac-b9cb-d6683c731815] 2026-03-28 00:03:40.520115 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=060eb738-d1a4-4076-87b8-3b3dadfa3170/d2c41d1e-c1aa-422a-bc56-ab0bbd118726] 2026-03-28 00:03:40.550007 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=42f67e59-9cc7-4ba8-83d1-61ef5fbaecdf/3670b387-e30b-4544-bca5-74e83387707d] 2026-03-28 00:03:40.557814 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=6bf0edd8-ae9f-4b47-992a-5ef6e9ffb800/c165f4e4-c145-4cd5-8a4b-fe75c460abfb] 2026-03-28 00:03:40.585991 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=060eb738-d1a4-4076-87b8-3b3dadfa3170/af575ecf-0cf6-48aa-a1b6-43f16240ccad] 2026-03-28 00:03:46.670539 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=42f67e59-9cc7-4ba8-83d1-61ef5fbaecdf/616f32f6-becb-4ce1-b615-c2a0fbaca869] 2026-03-28 00:03:46.676941 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=6bf0edd8-ae9f-4b47-992a-5ef6e9ffb800/0a0aea56-4050-4691-823a-d862fa48a59f] 2026-03-28 00:03:46.695942 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=060eb738-d1a4-4076-87b8-3b3dadfa3170/78ac07d6-a998-431a-8632-f54c89645a8d] 2026-03-28 00:03:47.195905 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-03-28 00:03:57.203069 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-28 00:03:57.663332 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=6a43a033-fbff-4116-9895-d55791777ad9] 2026-03-28 00:04:00.288279 | orchestrator | 2026-03-28 00:04:00.288374 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-03-28 00:04:00.288393 | orchestrator | 2026-03-28 00:04:00.288406 | orchestrator | Outputs: 2026-03-28 00:04:00.288420 | orchestrator | 2026-03-28 00:04:00.288432 | orchestrator | manager_address = 2026-03-28 00:04:00.288445 | orchestrator | private_key = 2026-03-28 00:04:00.590951 | orchestrator | ok: Runtime: 0:01:51.556897 2026-03-28 00:04:00.630277 | 2026-03-28 00:04:00.630486 | TASK [Create infrastructure (stable)] 2026-03-28 00:04:01.167151 | orchestrator | skipping: Conditional result was False 2026-03-28 00:04:01.189316 | 2026-03-28 00:04:01.189517 | TASK [Fetch manager address] 2026-03-28 00:04:01.658903 | orchestrator | ok 2026-03-28 00:04:01.669488 | 2026-03-28 00:04:01.669654 | TASK [Set manager_host address] 2026-03-28 00:04:01.750032 | orchestrator | ok 2026-03-28 00:04:01.759349 | 2026-03-28 00:04:01.759508 | LOOP [Update ansible collections] 2026-03-28 00:04:02.770660 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 00:04:02.771070 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-28 00:04:02.771137 | orchestrator | Starting galaxy collection install process 2026-03-28 00:04:02.771181 | orchestrator | Process install dependency map 2026-03-28 00:04:02.771223 | orchestrator | Starting collection install process 2026-03-28 00:04:02.771260 | orchestrator | Installing 'osism.commons:999.0.0' to 
'/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2026-03-28 00:04:02.771382 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2026-03-28 00:04:02.771623 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-28 00:04:02.771807 | orchestrator | ok: Item: commons Runtime: 0:00:00.662438 2026-03-28 00:04:03.639993 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-28 00:04:03.640410 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 00:04:03.640535 | orchestrator | Starting galaxy collection install process 2026-03-28 00:04:03.640576 | orchestrator | Process install dependency map 2026-03-28 00:04:03.640613 | orchestrator | Starting collection install process 2026-03-28 00:04:03.640647 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2026-03-28 00:04:03.640680 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2026-03-28 00:04:03.640712 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-28 00:04:03.640766 | orchestrator | ok: Item: services Runtime: 0:00:00.609401 2026-03-28 00:04:03.657486 | 2026-03-28 00:04:03.657654 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-28 00:04:14.252165 | orchestrator | ok 2026-03-28 00:04:14.261153 | 2026-03-28 00:04:14.261269 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-28 00:05:14.309551 | orchestrator | ok 2026-03-28 00:05:14.319028 | 2026-03-28 00:05:14.319161 | TASK [Fetch manager ssh hostkey] 2026-03-28 00:05:15.899708 | orchestrator | Output suppressed because no_log was given 2026-03-28 00:05:15.914792 | 2026-03-28 
00:05:15.915014 | TASK [Get ssh keypair from terraform environment] 2026-03-28 00:05:16.451831 | orchestrator | ok: Runtime: 0:00:00.008917 2026-03-28 00:05:16.467469 | 2026-03-28 00:05:16.467700 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-28 00:05:16.517632 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-28 00:05:16.527898 | 2026-03-28 00:05:16.528038 | TASK [Run manager part 0] 2026-03-28 00:05:17.504462 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 00:05:17.568225 | orchestrator | 2026-03-28 00:05:17.568282 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-28 00:05:17.568291 | orchestrator | 2026-03-28 00:05:17.568306 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-28 00:05:19.801127 | orchestrator | ok: [testbed-manager] 2026-03-28 00:05:19.801189 | orchestrator | 2026-03-28 00:05:19.801213 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-28 00:05:19.801222 | orchestrator | 2026-03-28 00:05:19.801232 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:05:21.988957 | orchestrator | ok: [testbed-manager] 2026-03-28 00:05:21.989304 | orchestrator | 2026-03-28 00:05:21.989335 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-28 00:05:22.862122 | orchestrator | ok: [testbed-manager] 2026-03-28 00:05:22.862261 | orchestrator | 2026-03-28 00:05:22.862275 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-28 00:05:22.911810 | orchestrator | skipping: [testbed-manager] 2026-03-28 
00:05:22.911864 | orchestrator | 2026-03-28 00:05:22.911874 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-28 00:05:22.949104 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:05:22.949178 | orchestrator | 2026-03-28 00:05:22.949191 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-28 00:05:22.981196 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:05:22.981245 | orchestrator | 2026-03-28 00:05:22.981250 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-28 00:05:23.810531 | orchestrator | changed: [testbed-manager] 2026-03-28 00:05:23.810646 | orchestrator | 2026-03-28 00:05:23.810669 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-28 00:08:29.751463 | orchestrator | changed: [testbed-manager] 2026-03-28 00:08:29.751536 | orchestrator | 2026-03-28 00:08:29.751547 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-28 00:09:50.511986 | orchestrator | changed: [testbed-manager] 2026-03-28 00:09:50.512078 | orchestrator | 2026-03-28 00:09:50.512100 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-28 00:10:14.837681 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:14.837764 | orchestrator | 2026-03-28 00:10:14.837779 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-28 00:10:24.598103 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:24.598145 | orchestrator | 2026-03-28 00:10:24.598273 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-28 00:10:24.640554 | orchestrator | ok: [testbed-manager] 2026-03-28 00:10:24.640595 | orchestrator | 2026-03-28 00:10:24.640605 | orchestrator | TASK 
[Get current user] ******************************************************** 2026-03-28 00:10:25.472461 | orchestrator | ok: [testbed-manager] 2026-03-28 00:10:25.472505 | orchestrator | 2026-03-28 00:10:25.472515 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-28 00:10:26.210931 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:26.210977 | orchestrator | 2026-03-28 00:10:26.210989 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-28 00:10:33.217257 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:33.217452 | orchestrator | 2026-03-28 00:10:33.217475 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-28 00:10:39.318486 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:39.318583 | orchestrator | 2026-03-28 00:10:39.318603 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-28 00:10:42.072887 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:42.072978 | orchestrator | 2026-03-28 00:10:42.072994 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-28 00:10:43.924888 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:43.924928 | orchestrator | 2026-03-28 00:10:43.924937 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-28 00:10:45.091575 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-28 00:10:45.091711 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-28 00:10:45.091731 | orchestrator | 2026-03-28 00:10:45.091747 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-28 00:10:45.138780 | orchestrator | [DEPRECATION WARNING]: The connection's stdin 
object is deprecated. Call 2026-03-28 00:10:45.138843 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-28 00:10:45.138852 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-28 00:10:45.138862 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-28 00:10:48.526792 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-28 00:10:48.526887 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-28 00:10:48.526902 | orchestrator | 2026-03-28 00:10:48.526915 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-28 00:10:49.111306 | orchestrator | changed: [testbed-manager] 2026-03-28 00:10:49.111417 | orchestrator | 2026-03-28 00:10:49.111445 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-28 00:11:12.576351 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-28 00:11:12.576390 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-28 00:11:12.576396 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-28 00:11:12.576400 | orchestrator | 2026-03-28 00:11:12.576405 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-28 00:11:14.980693 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-28 00:11:14.980776 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-28 00:11:14.980792 | orchestrator | 2026-03-28 00:11:14.980808 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-28 00:11:14.980820 | orchestrator | 2026-03-28 00:11:14.980832 | orchestrator | TASK [Gathering Facts] ********************************************************* 
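The "Install netaddr in venv" / "Install ansible-core in venv" steps above follow one pattern: every Python dependency goes into a dedicated virtualenv rather than the system interpreter. A minimal sketch of that pattern (the `./demo-venv` path is illustrative, not the testbed's `/opt/venv`):

```shell
# Create a throwaway venv and confirm the interpreter inside it is isolated
# from the system Python (mirrors the /opt/venv usage in the log above).
python3 -m venv ./demo-venv
. ./demo-venv/bin/activate
python -c 'import sys; print("venv" if sys.prefix != sys.base_prefix else "system")'
# → venv
```

Inside a venv, `sys.prefix` points at the venv while `sys.base_prefix` still points at the base installation, which is why the check above distinguishes the two.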
2026-03-28 00:11:16.518762 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:16.518809 | orchestrator | 2026-03-28 00:11:16.518819 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-28 00:11:16.562306 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:16.562363 | orchestrator | 2026-03-28 00:11:16.562369 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-28 00:11:16.632730 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:16.632812 | orchestrator | 2026-03-28 00:11:16.632822 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-28 00:11:17.453222 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:17.453310 | orchestrator | 2026-03-28 00:11:17.453330 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-28 00:11:18.227342 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:18.227432 | orchestrator | 2026-03-28 00:11:18.227450 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-28 00:11:19.702373 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-28 00:11:19.702485 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-28 00:11:19.702503 | orchestrator | 2026-03-28 00:11:19.702516 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-28 00:11:21.067461 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:21.067537 | orchestrator | 2026-03-28 00:11:21.067553 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-28 00:11:22.775312 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 00:11:22.775389 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-28 
00:11:22.775416 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-28 00:11:22.775429 | orchestrator | 2026-03-28 00:11:22.775442 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-28 00:11:22.830795 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:11:22.830831 | orchestrator | 2026-03-28 00:11:22.830838 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-28 00:11:22.890133 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:11:22.890294 | orchestrator | 2026-03-28 00:11:22.890314 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-28 00:11:23.430234 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:23.430307 | orchestrator | 2026-03-28 00:11:23.430323 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-28 00:11:23.492211 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:11:23.492248 | orchestrator | 2026-03-28 00:11:23.492253 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-28 00:11:24.431472 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:11:24.431591 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:24.431612 | orchestrator | 2026-03-28 00:11:24.431625 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-28 00:11:24.462416 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:11:24.462462 | orchestrator | 2026-03-28 00:11:24.462473 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-28 00:11:24.493250 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:11:24.493319 | orchestrator | 2026-03-28 00:11:24.493334 | orchestrator | TASK 
[osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-28 00:11:24.527179 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:11:24.527258 | orchestrator | 2026-03-28 00:11:24.527282 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-28 00:11:24.592773 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:11:24.592849 | orchestrator | 2026-03-28 00:11:24.592866 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-28 00:11:25.273769 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:25.273844 | orchestrator | 2026-03-28 00:11:25.273862 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-28 00:11:25.273875 | orchestrator | 2026-03-28 00:11:25.273888 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:11:26.676437 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:26.676491 | orchestrator | 2026-03-28 00:11:26.676497 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-28 00:11:27.630736 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:27.630834 | orchestrator | 2026-03-28 00:11:27.630851 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:11:27.630865 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-03-28 00:11:27.630876 | orchestrator | 2026-03-28 00:11:27.784580 | orchestrator | ok: Runtime: 0:06:10.877481 2026-03-28 00:11:27.796199 | 2026-03-28 00:11:27.796313 | TASK [Point out that the log in on the manager is now possible] 2026-03-28 00:11:27.841081 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
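The PLAY RECAP summary above (`ok=33 changed=23 unreachable=0 failed=0 ...`) is the line wrapper scripts typically inspect to decide whether a run succeeded. A hypothetical check, not part of the testbed tooling:

```shell
# Flag a recap line as failed if any host reports failed>0 or unreachable>0.
recap='testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0'
if printf '%s\n' "$recap" | grep -Eq '(failed|unreachable)=[1-9]'; then
  echo "recap reports errors"
else
  echo "recap clean"
fi
# → recap clean
```

The character class `[1-9]` matches any non-zero count (including multi-digit values such as `failed=12`, whose first digit is non-zero), while `failed=0` and `unreachable=0` never match.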
2026-03-28 00:11:27.850778 | 2026-03-28 00:11:27.850918 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-28 00:11:27.884511 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-28 00:11:27.892657 | 2026-03-28 00:11:27.892768 | TASK [Run manager part 1 + 2] 2026-03-28 00:11:28.717571 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-28 00:11:28.772570 | orchestrator | 2026-03-28 00:11:28.772622 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-28 00:11:28.772629 | orchestrator | 2026-03-28 00:11:28.772642 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:11:31.401882 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:31.401940 | orchestrator | 2026-03-28 00:11:31.401963 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-28 00:11:31.438148 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:11:31.438194 | orchestrator | 2026-03-28 00:11:31.438202 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-28 00:11:31.472412 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:31.472458 | orchestrator | 2026-03-28 00:11:31.472465 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-28 00:11:31.526499 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:31.526613 | orchestrator | 2026-03-28 00:11:31.526630 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-28 00:11:31.598555 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:31.598649 | orchestrator | 2026-03-28 00:11:31.598669 | 

orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-28 00:11:31.662490 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:31.662565 | orchestrator | 2026-03-28 00:11:31.662573 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-28 00:11:31.702183 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-28 00:11:31.702270 | orchestrator | 2026-03-28 00:11:31.702286 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-28 00:11:32.478782 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:32.478872 | orchestrator | 2026-03-28 00:11:32.478891 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-28 00:11:32.525297 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:11:32.525389 | orchestrator | 2026-03-28 00:11:32.525417 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-28 00:11:33.956904 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:33.957001 | orchestrator | 2026-03-28 00:11:33.957021 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-28 00:11:34.614132 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:34.614202 | orchestrator | 2026-03-28 00:11:34.614213 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-28 00:11:35.803289 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:35.803349 | orchestrator | 2026-03-28 00:11:35.803359 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-28 00:11:53.677984 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:53.678131 | orchestrator | 
2026-03-28 00:11:53.678148 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-28 00:11:54.407529 | orchestrator | ok: [testbed-manager] 2026-03-28 00:11:54.407623 | orchestrator | 2026-03-28 00:11:54.407641 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-28 00:11:54.468233 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:11:54.468297 | orchestrator | 2026-03-28 00:11:54.468307 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-28 00:11:55.483422 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:55.484275 | orchestrator | 2026-03-28 00:11:55.484346 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-28 00:11:56.471196 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:56.471286 | orchestrator | 2026-03-28 00:11:56.471302 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-28 00:11:57.082501 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:57.082585 | orchestrator | 2026-03-28 00:11:57.082601 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-28 00:11:57.125864 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-28 00:11:57.125983 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-28 00:11:57.125998 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-28 00:11:57.126009 | orchestrator | deprecation_warnings=False in ansible.cfg. 
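The DEPRECATION WARNING above itself names the knob for silencing such messages. A sketch of the corresponding `ansible.cfg` fragment, written to a local file here purely for illustration:

```shell
# Write an ansible.cfg that disables deprecation warnings, as suggested by
# the warning text above (./ansible.cfg is a local illustration path, not
# a file the testbed job actually manages).
cat > ./ansible.cfg <<'EOF'
[defaults]
deprecation_warnings = False
EOF
grep -c 'deprecation_warnings = False' ./ansible.cfg
# → 1
```

Ansible picks up `ansible.cfg` from the current working directory (among other locations), so dropping this fragment next to the playbook is usually enough to suppress the warning for that run.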
2026-03-28 00:11:59.819158 | orchestrator | changed: [testbed-manager] 2026-03-28 00:11:59.819244 | orchestrator | 2026-03-28 00:11:59.819258 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-28 00:12:09.573194 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-28 00:12:09.573243 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-28 00:12:09.573254 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-28 00:12:09.573261 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-28 00:12:09.573272 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-28 00:12:09.573279 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-28 00:12:09.573286 | orchestrator | 2026-03-28 00:12:09.573293 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-28 00:12:10.669489 | orchestrator | changed: [testbed-manager] 2026-03-28 00:12:10.669533 | orchestrator | 2026-03-28 00:12:10.669541 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-28 00:12:13.976909 | orchestrator | changed: [testbed-manager] 2026-03-28 00:12:13.977000 | orchestrator | 2026-03-28 00:12:13.977016 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-28 00:12:14.022719 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:12:14.022819 | orchestrator | 2026-03-28 00:12:14.022842 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-28 00:14:02.956069 | orchestrator | changed: [testbed-manager] 2026-03-28 00:14:02.956236 | orchestrator | 2026-03-28 00:14:02.956259 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-28 00:14:04.190422 | orchestrator | ok: [testbed-manager] 2026-03-28 00:14:04.191372 | 
orchestrator | 2026-03-28 00:14:04.191420 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:14:04.191443 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-03-28 00:14:04.191463 | orchestrator | 2026-03-28 00:14:04.545300 | orchestrator | ok: Runtime: 0:02:36.104973 2026-03-28 00:14:04.563415 | 2026-03-28 00:14:04.563577 | TASK [Reboot manager] 2026-03-28 00:14:06.103722 | orchestrator | ok: Runtime: 0:00:00.978919 2026-03-28 00:14:06.119316 | 2026-03-28 00:14:06.119463 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-28 00:14:22.531060 | orchestrator | ok 2026-03-28 00:14:22.541312 | 2026-03-28 00:14:22.541442 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-28 00:15:22.588296 | orchestrator | ok 2026-03-28 00:15:22.598446 | 2026-03-28 00:15:22.598569 | TASK [Deploy manager + bootstrap nodes] 2026-03-28 00:15:25.258545 | orchestrator | 2026-03-28 00:15:25.258760 | orchestrator | # DEPLOY MANAGER 2026-03-28 00:15:25.258786 | orchestrator | 2026-03-28 00:15:25.258801 | orchestrator | + set -e 2026-03-28 00:15:25.258814 | orchestrator | + echo 2026-03-28 00:15:25.258828 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-28 00:15:25.258845 | orchestrator | + echo 2026-03-28 00:15:25.258894 | orchestrator | + cat /opt/manager-vars.sh 2026-03-28 00:15:25.262774 | orchestrator | export NUMBER_OF_NODES=6 2026-03-28 00:15:25.262822 | orchestrator | 2026-03-28 00:15:25.262844 | orchestrator | export CEPH_VERSION=reef 2026-03-28 00:15:25.262865 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-28 00:15:25.262878 | orchestrator | export MANAGER_VERSION=latest 2026-03-28 00:15:25.262921 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-03-28 00:15:25.262932 | orchestrator | 2026-03-28 00:15:25.263002 | orchestrator | export ARA=false 2026-03-28 00:15:25.263016 | 
orchestrator | export DEPLOY_MODE=manager 2026-03-28 00:15:25.263034 | orchestrator | export TEMPEST=true 2026-03-28 00:15:25.263046 | orchestrator | export IS_ZUUL=true 2026-03-28 00:15:25.263056 | orchestrator | 2026-03-28 00:15:25.263075 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2026-03-28 00:15:25.263086 | orchestrator | export EXTERNAL_API=false 2026-03-28 00:15:25.263097 | orchestrator | 2026-03-28 00:15:25.263107 | orchestrator | export IMAGE_USER=ubuntu 2026-03-28 00:15:25.263121 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-28 00:15:25.263132 | orchestrator | 2026-03-28 00:15:25.263143 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-28 00:15:25.263161 | orchestrator | 2026-03-28 00:15:25.263172 | orchestrator | + echo 2026-03-28 00:15:25.263185 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 00:15:25.263879 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 00:15:25.263904 | orchestrator | ++ INTERACTIVE=false 2026-03-28 00:15:25.263918 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 00:15:25.263932 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 00:15:25.264233 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 00:15:25.264339 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 00:15:25.264372 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 00:15:25.264384 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 00:15:25.264393 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 00:15:25.264403 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 00:15:25.264414 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 00:15:25.264425 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-28 00:15:25.264434 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-28 00:15:25.264444 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-03-28 00:15:25.264470 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-03-28 00:15:25.264497 | orchestrator | ++ 
export ARA=false 2026-03-28 00:15:25.264508 | orchestrator | ++ ARA=false 2026-03-28 00:15:25.264518 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 00:15:25.264527 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 00:15:25.264537 | orchestrator | ++ export TEMPEST=true 2026-03-28 00:15:25.264546 | orchestrator | ++ TEMPEST=true 2026-03-28 00:15:25.264556 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 00:15:25.264566 | orchestrator | ++ IS_ZUUL=true 2026-03-28 00:15:25.264584 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2026-03-28 00:15:25.264594 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2026-03-28 00:15:25.264604 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 00:15:25.264613 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 00:15:25.264623 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 00:15:25.264632 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 00:15:25.264645 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 00:15:25.264655 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 00:15:25.264667 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 00:15:25.264677 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 00:15:25.264768 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-28 00:15:25.326977 | orchestrator | + docker version 2026-03-28 00:15:25.428244 | orchestrator | Client: Docker Engine - Community 2026-03-28 00:15:25.428349 | orchestrator | Version: 27.5.1 2026-03-28 00:15:25.428365 | orchestrator | API version: 1.47 2026-03-28 00:15:25.428379 | orchestrator | Go version: go1.22.11 2026-03-28 00:15:25.428389 | orchestrator | Git commit: 9f9e405 2026-03-28 00:15:25.428400 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-28 00:15:25.428412 | orchestrator | OS/Arch: linux/amd64 2026-03-28 00:15:25.428423 | orchestrator | Context: default 2026-03-28 00:15:25.428434 | orchestrator | 2026-03-28 
00:15:25.428445 | orchestrator | Server: Docker Engine - Community 2026-03-28 00:15:25.428456 | orchestrator | Engine: 2026-03-28 00:15:25.428467 | orchestrator | Version: 27.5.1 2026-03-28 00:15:25.428479 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-28 00:15:25.428520 | orchestrator | Go version: go1.22.11 2026-03-28 00:15:25.428531 | orchestrator | Git commit: 4c9b3b0 2026-03-28 00:15:25.428542 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-28 00:15:25.428553 | orchestrator | OS/Arch: linux/amd64 2026-03-28 00:15:25.428564 | orchestrator | Experimental: false 2026-03-28 00:15:25.428575 | orchestrator | containerd: 2026-03-28 00:15:25.428585 | orchestrator | Version: v2.2.2 2026-03-28 00:15:25.428597 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-28 00:15:25.428608 | orchestrator | runc: 2026-03-28 00:15:25.428619 | orchestrator | Version: 1.3.4 2026-03-28 00:15:25.428630 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-28 00:15:25.428641 | orchestrator | docker-init: 2026-03-28 00:15:25.428651 | orchestrator | Version: 0.19.0 2026-03-28 00:15:25.428663 | orchestrator | GitCommit: de40ad0 2026-03-28 00:15:25.431879 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-28 00:15:25.441329 | orchestrator | + set -e 2026-03-28 00:15:25.441439 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 00:15:25.441456 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 00:15:25.441471 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 00:15:25.441482 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 00:15:25.441492 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 00:15:25.441503 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 00:15:25.441515 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 00:15:25.441526 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-28 00:15:25.441537 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-28 
00:15:25.441548 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-03-28 00:15:25.441559 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-03-28 00:15:25.441569 | orchestrator | ++ export ARA=false 2026-03-28 00:15:25.441580 | orchestrator | ++ ARA=false 2026-03-28 00:15:25.441591 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 00:15:25.441603 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 00:15:25.441613 | orchestrator | ++ export TEMPEST=true 2026-03-28 00:15:25.441624 | orchestrator | ++ TEMPEST=true 2026-03-28 00:15:25.441635 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 00:15:25.441646 | orchestrator | ++ IS_ZUUL=true 2026-03-28 00:15:25.441656 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2026-03-28 00:15:25.441668 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2026-03-28 00:15:25.441678 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 00:15:25.441689 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 00:15:25.441700 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 00:15:25.441710 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 00:15:25.441721 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 00:15:25.441732 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 00:15:25.441743 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 00:15:25.441754 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 00:15:25.441765 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 00:15:25.441775 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 00:15:25.441786 | orchestrator | ++ INTERACTIVE=false 2026-03-28 00:15:25.441797 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 00:15:25.441811 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 00:15:25.441833 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 00:15:25.441844 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 00:15:25.441855 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-03-28 00:15:25.449769 | orchestrator | + set -e 2026-03-28 00:15:25.449875 | orchestrator | + VERSION=reef 2026-03-28 00:15:25.450852 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:15:25.457868 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-28 00:15:25.457979 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:15:25.464075 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2026-03-28 00:15:25.467400 | orchestrator | + set -e 2026-03-28 00:15:25.467653 | orchestrator | + VERSION=2025.1 2026-03-28 00:15:25.468375 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:15:25.472659 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-28 00:15:25.472698 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2026-03-28 00:15:25.477563 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-28 00:15:25.478349 | orchestrator | ++ semver latest 7.0.0 2026-03-28 00:15:25.535790 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:15:25.535887 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 00:15:25.535902 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-28 00:15:25.536841 | orchestrator | ++ semver latest 10.0.0-0 2026-03-28 00:15:25.601508 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:15:25.602348 | orchestrator | ++ semver 2025.1 2025.1 2026-03-28 00:15:25.687745 | orchestrator | + [[ 0 -ge 0 ]] 2026-03-28 00:15:25.687863 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-28 00:15:25.694751 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' 
/opt/configuration/environments/kolla/configuration.yml 2026-03-28 00:15:25.700040 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-28 00:15:25.797866 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 00:15:25.799762 | orchestrator | + source /opt/venv/bin/activate 2026-03-28 00:15:25.800933 | orchestrator | ++ deactivate nondestructive 2026-03-28 00:15:25.800992 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:15:25.801105 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:15:25.801156 | orchestrator | ++ hash -r 2026-03-28 00:15:25.801243 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:15:25.801338 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-28 00:15:25.801351 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-28 00:15:25.801398 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-28 00:15:25.801635 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-28 00:15:25.801649 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-28 00:15:25.801675 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-28 00:15:25.801692 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-28 00:15:25.801714 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:15:25.802079 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:15:25.802104 | orchestrator | ++ export PATH 2026-03-28 00:15:25.802121 | orchestrator | ++ '[' -n '' ']' 2026-03-28 00:15:25.802137 | orchestrator | ++ '[' -z '' ']' 2026-03-28 00:15:25.802152 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-28 00:15:25.802167 | orchestrator | ++ PS1='(venv) ' 2026-03-28 00:15:25.802185 | orchestrator | ++ export PS1 2026-03-28 00:15:25.802202 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-28 00:15:25.802219 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-28 
00:15:25.802230 | orchestrator | ++ hash -r 2026-03-28 00:15:25.802357 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-28 00:15:27.164731 | orchestrator | 2026-03-28 00:15:27.164839 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-28 00:15:27.164855 | orchestrator | 2026-03-28 00:15:27.164867 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-28 00:15:27.748994 | orchestrator | ok: [testbed-manager] 2026-03-28 00:15:27.749102 | orchestrator | 2026-03-28 00:15:27.749118 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-28 00:15:28.807092 | orchestrator | changed: [testbed-manager] 2026-03-28 00:15:28.807208 | orchestrator | 2026-03-28 00:15:28.807235 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-28 00:15:28.807255 | orchestrator | 2026-03-28 00:15:28.807275 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:15:31.439605 | orchestrator | ok: [testbed-manager] 2026-03-28 00:15:31.439705 | orchestrator | 2026-03-28 00:15:31.439721 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-28 00:15:31.494793 | orchestrator | ok: [testbed-manager] 2026-03-28 00:15:31.494900 | orchestrator | 2026-03-28 00:15:31.494918 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-28 00:15:31.953151 | orchestrator | changed: [testbed-manager] 2026-03-28 00:15:31.953251 | orchestrator | 2026-03-28 00:15:31.953269 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-28 00:15:31.989705 | orchestrator | skipping: 
[testbed-manager] 2026-03-28 00:15:31.989794 | orchestrator | 2026-03-28 00:15:31.989808 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-28 00:15:32.355049 | orchestrator | changed: [testbed-manager] 2026-03-28 00:15:32.355218 | orchestrator | 2026-03-28 00:15:32.355249 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-28 00:15:32.704973 | orchestrator | ok: [testbed-manager] 2026-03-28 00:15:32.705084 | orchestrator | 2026-03-28 00:15:32.705101 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-28 00:15:32.830751 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:15:32.830844 | orchestrator | 2026-03-28 00:15:32.830861 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-28 00:15:32.830873 | orchestrator | 2026-03-28 00:15:32.830884 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:15:34.654315 | orchestrator | ok: [testbed-manager] 2026-03-28 00:15:34.654435 | orchestrator | 2026-03-28 00:15:34.654461 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-28 00:15:34.753230 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-28 00:15:34.753353 | orchestrator | 2026-03-28 00:15:34.753380 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-28 00:15:34.823847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-28 00:15:34.823984 | orchestrator | 2026-03-28 00:15:34.824003 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-28 00:15:36.029640 | orchestrator | changed: [testbed-manager] => 
(item=/opt/traefik) 2026-03-28 00:15:36.029728 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-28 00:15:36.029740 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-28 00:15:36.029751 | orchestrator | 2026-03-28 00:15:36.029763 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-28 00:15:37.919411 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-28 00:15:37.919525 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-28 00:15:37.919541 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-28 00:15:37.919554 | orchestrator | 2026-03-28 00:15:37.919566 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-28 00:15:38.574014 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:15:38.574148 | orchestrator | changed: [testbed-manager] 2026-03-28 00:15:38.574161 | orchestrator | 2026-03-28 00:15:38.574171 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-28 00:15:39.247984 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:15:39.248105 | orchestrator | changed: [testbed-manager] 2026-03-28 00:15:39.248123 | orchestrator | 2026-03-28 00:15:39.248136 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-28 00:15:39.312903 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:15:39.313084 | orchestrator | 2026-03-28 00:15:39.313111 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-28 00:15:39.677528 | orchestrator | ok: [testbed-manager] 2026-03-28 00:15:39.677626 | orchestrator | 2026-03-28 00:15:39.677649 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-28 
00:15:39.757805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-28 00:15:39.757895 | orchestrator | 2026-03-28 00:15:39.757969 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-28 00:15:40.935987 | orchestrator | changed: [testbed-manager] 2026-03-28 00:15:40.936123 | orchestrator | 2026-03-28 00:15:40.936153 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-28 00:15:41.800332 | orchestrator | changed: [testbed-manager] 2026-03-28 00:15:41.800449 | orchestrator | 2026-03-28 00:15:41.800476 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-28 00:15:52.544669 | orchestrator | changed: [testbed-manager] 2026-03-28 00:15:52.544774 | orchestrator | 2026-03-28 00:15:52.544792 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-28 00:15:52.605868 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:15:52.605978 | orchestrator | 2026-03-28 00:15:52.605992 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-28 00:15:52.606095 | orchestrator | 2026-03-28 00:15:52.606111 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:15:54.555800 | orchestrator | ok: [testbed-manager] 2026-03-28 00:15:54.555952 | orchestrator | 2026-03-28 00:15:54.555970 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-28 00:15:54.694299 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-28 00:15:54.694398 | orchestrator | 2026-03-28 00:15:54.694413 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-28 00:15:54.765724 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:15:54.765822 | orchestrator | 2026-03-28 00:15:54.765840 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-28 00:15:57.380807 | orchestrator | ok: [testbed-manager] 2026-03-28 00:15:57.380951 | orchestrator | 2026-03-28 00:15:57.380972 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-28 00:15:57.438591 | orchestrator | ok: [testbed-manager] 2026-03-28 00:15:57.438691 | orchestrator | 2026-03-28 00:15:57.438707 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-28 00:15:57.575134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-28 00:15:57.575242 | orchestrator | 2026-03-28 00:15:57.575257 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-28 00:16:00.522059 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-28 00:16:00.522148 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-28 00:16:00.522161 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-28 00:16:00.522172 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-28 00:16:00.522182 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-28 00:16:00.522192 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-28 00:16:00.522202 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-28 00:16:00.522212 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-28 00:16:00.522222 | orchestrator | 2026-03-28 00:16:00.522233 | 
orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-28 00:16:01.191197 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:01.191298 | orchestrator | 2026-03-28 00:16:01.191314 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-28 00:16:01.845739 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:01.845866 | orchestrator | 2026-03-28 00:16:01.845916 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-28 00:16:01.950834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-28 00:16:01.950964 | orchestrator | 2026-03-28 00:16:01.950984 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-28 00:16:03.202278 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-28 00:16:03.202382 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-28 00:16:03.202398 | orchestrator | 2026-03-28 00:16:03.202411 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-28 00:16:03.934764 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:03.934862 | orchestrator | 2026-03-28 00:16:03.934916 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-28 00:16:03.990415 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:16:03.990510 | orchestrator | 2026-03-28 00:16:03.990525 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-28 00:16:04.086332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-28 00:16:04.086409 | orchestrator | 2026-03-28 00:16:04.086417 | 
orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-28 00:16:04.726722 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:04.726847 | orchestrator | 2026-03-28 00:16:04.726936 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-28 00:16:04.803328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-28 00:16:04.803420 | orchestrator | 2026-03-28 00:16:04.803434 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-28 00:16:06.193751 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:16:06.193844 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-28 00:16:06.193857 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:06.193904 | orchestrator | 2026-03-28 00:16:06.193917 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-28 00:16:06.843170 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:06.843281 | orchestrator | 2026-03-28 00:16:06.843297 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-28 00:16:06.908027 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:16:06.908108 | orchestrator | 2026-03-28 00:16:06.908120 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-28 00:16:07.022335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-28 00:16:07.022454 | orchestrator | 2026-03-28 00:16:07.022470 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-28 00:16:07.591724 | orchestrator | changed: [testbed-manager] 
2026-03-28 00:16:07.591833 | orchestrator | 2026-03-28 00:16:07.591907 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-28 00:16:08.012312 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:08.012428 | orchestrator | 2026-03-28 00:16:08.012453 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-28 00:16:09.282600 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-28 00:16:09.282692 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-28 00:16:09.282706 | orchestrator | 2026-03-28 00:16:09.282717 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-28 00:16:09.961045 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:09.961160 | orchestrator | 2026-03-28 00:16:09.961186 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-28 00:16:10.333741 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:10.333835 | orchestrator | 2026-03-28 00:16:10.333850 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-28 00:16:10.706964 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:10.707057 | orchestrator | 2026-03-28 00:16:10.707073 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-28 00:16:10.759941 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:16:10.760059 | orchestrator | 2026-03-28 00:16:10.760085 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-28 00:16:10.854153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-28 00:16:10.854265 | orchestrator | 2026-03-28 00:16:10.854282 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-03-28 00:16:10.914703 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:10.914786 | orchestrator | 2026-03-28 00:16:10.914793 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-28 00:16:12.990077 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-28 00:16:12.990180 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-28 00:16:12.990195 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-28 00:16:12.990208 | orchestrator | 2026-03-28 00:16:12.990221 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-28 00:16:13.756500 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:13.756592 | orchestrator | 2026-03-28 00:16:13.756609 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-28 00:16:14.512915 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:14.513043 | orchestrator | 2026-03-28 00:16:14.513061 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-28 00:16:15.295360 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:15.295466 | orchestrator | 2026-03-28 00:16:15.295483 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-28 00:16:15.376224 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-28 00:16:15.376288 | orchestrator | 2026-03-28 00:16:15.376294 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-28 00:16:15.431331 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:15.431428 | orchestrator | 2026-03-28 00:16:15.431441 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-03-28 00:16:16.147122 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-28 00:16:16.147201 | orchestrator | 2026-03-28 00:16:16.147210 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-28 00:16:16.229770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-28 00:16:16.229942 | orchestrator | 2026-03-28 00:16:16.229965 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-28 00:16:16.972038 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:16.972134 | orchestrator | 2026-03-28 00:16:16.972150 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-28 00:16:17.677282 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:17.677400 | orchestrator | 2026-03-28 00:16:17.677426 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-28 00:16:17.729776 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:16:17.729930 | orchestrator | 2026-03-28 00:16:17.729949 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-28 00:16:17.793472 | orchestrator | ok: [testbed-manager] 2026-03-28 00:16:17.793565 | orchestrator | 2026-03-28 00:16:17.793581 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-28 00:16:18.672556 | orchestrator | changed: [testbed-manager] 2026-03-28 00:16:18.672670 | orchestrator | 2026-03-28 00:16:18.672687 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-28 00:17:36.793082 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:36.793227 | orchestrator | 2026-03-28 
00:17:36.793247 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-28 00:17:37.822533 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:37.822664 | orchestrator | 2026-03-28 00:17:37.822683 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-28 00:17:37.882696 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:17:37.882839 | orchestrator | 2026-03-28 00:17:37.882880 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-28 00:17:40.221607 | orchestrator | changed: [testbed-manager] 2026-03-28 00:17:40.221769 | orchestrator | 2026-03-28 00:17:40.221802 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-28 00:17:40.350191 | orchestrator | ok: [testbed-manager] 2026-03-28 00:17:40.350317 | orchestrator | 2026-03-28 00:17:40.350334 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-28 00:17:40.350346 | orchestrator | 2026-03-28 00:17:40.350358 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-28 00:17:40.407152 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:17:40.407271 | orchestrator | 2026-03-28 00:17:40.407288 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-28 00:18:40.451601 | orchestrator | Pausing for 60 seconds 2026-03-28 00:18:40.451756 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:40.451772 | orchestrator | 2026-03-28 00:18:40.451786 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-28 00:18:43.550153 | orchestrator | changed: [testbed-manager] 2026-03-28 00:18:43.550295 | orchestrator | 2026-03-28 00:18:43.550320 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-03-28 00:19:45.703498 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-28 00:19:45.703639 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-28 00:19:45.703655 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-28 00:19:45.703667 | orchestrator | changed: [testbed-manager] 2026-03-28 00:19:45.703680 | orchestrator | 2026-03-28 00:19:45.703692 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-28 00:19:51.795190 | orchestrator | changed: [testbed-manager] 2026-03-28 00:19:51.795291 | orchestrator | 2026-03-28 00:19:51.795307 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-28 00:19:51.877169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-28 00:19:51.877270 | orchestrator | 2026-03-28 00:19:51.877286 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-28 00:19:51.877299 | orchestrator | 2026-03-28 00:19:51.877311 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-28 00:19:51.932413 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:19:51.932590 | orchestrator | 2026-03-28 00:19:51.932620 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-28 00:19:52.003059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-28 00:19:52.003164 | orchestrator | 2026-03-28 00:19:52.003181 | orchestrator | TASK [osism.services.manager : Deploy service 
manager version check script] **** 2026-03-28 00:19:52.834994 | orchestrator | changed: [testbed-manager] 2026-03-28 00:19:52.835095 | orchestrator | 2026-03-28 00:19:52.835120 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-28 00:19:56.167445 | orchestrator | ok: [testbed-manager] 2026-03-28 00:19:56.167606 | orchestrator | 2026-03-28 00:19:56.167632 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-28 00:19:56.244683 | orchestrator | ok: [testbed-manager] => { 2026-03-28 00:19:56.244780 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-28 00:19:56.244795 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-28 00:19:56.244806 | orchestrator | "Checking running containers against expected versions...", 2026-03-28 00:19:56.244818 | orchestrator | "", 2026-03-28 00:19:56.244830 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-28 00:19:56.244842 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-28 00:19:56.244853 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.244864 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-28 00:19:56.244874 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.244886 | orchestrator | "", 2026-03-28 00:19:56.244897 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-28 00:19:56.244908 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-28 00:19:56.244919 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.244930 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-28 00:19:56.244941 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.244951 | orchestrator | "", 2026-03-28 00:19:56.244962 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes 
Service)", 2026-03-28 00:19:56.244973 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-28 00:19:56.244984 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.244995 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-28 00:19:56.245005 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245017 | orchestrator | "", 2026-03-28 00:19:56.245028 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-28 00:19:56.245039 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-28 00:19:56.245049 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.245101 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-28 00:19:56.245113 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245124 | orchestrator | "", 2026-03-28 00:19:56.245134 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-28 00:19:56.245145 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-03-28 00:19:56.245156 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.245167 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-03-28 00:19:56.245177 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245188 | orchestrator | "", 2026-03-28 00:19:56.245199 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-28 00:19:56.245209 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.245220 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.245231 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.245241 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245252 | orchestrator | "", 2026-03-28 00:19:56.245263 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-28 00:19:56.245274 | orchestrator | " Expected: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-03-28 00:19:56.245284 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.245295 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-28 00:19:56.245306 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245316 | orchestrator | "", 2026-03-28 00:19:56.245327 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-28 00:19:56.245347 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-28 00:19:56.245359 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.245369 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-28 00:19:56.245380 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245396 | orchestrator | "", 2026-03-28 00:19:56.245407 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-28 00:19:56.245418 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-28 00:19:56.245429 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.245440 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-28 00:19:56.245451 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245462 | orchestrator | "", 2026-03-28 00:19:56.245472 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-28 00:19:56.245483 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-28 00:19:56.245496 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.245516 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-28 00:19:56.245697 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245723 | orchestrator | "", 2026-03-28 00:19:56.245734 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-28 00:19:56.245745 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.245756 | orchestrator | 
" Enabled: true", 2026-03-28 00:19:56.245767 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.245777 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245788 | orchestrator | "", 2026-03-28 00:19:56.245799 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-28 00:19:56.245810 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.245821 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.245831 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.245842 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245853 | orchestrator | "", 2026-03-28 00:19:56.245863 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-28 00:19:56.245874 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.245885 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.245895 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.245906 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245932 | orchestrator | "", 2026-03-28 00:19:56.245943 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-28 00:19:56.245954 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.245965 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.245976 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.245987 | orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.245998 | orchestrator | "", 2026-03-28 00:19:56.246008 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-28 00:19:56.246102 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.246115 | orchestrator | " Enabled: true", 2026-03-28 00:19:56.246126 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-28 00:19:56.246137 
| orchestrator | " Status: ✅ MATCH", 2026-03-28 00:19:56.246147 | orchestrator | "", 2026-03-28 00:19:56.246158 | orchestrator | "=== Summary ===", 2026-03-28 00:19:56.246169 | orchestrator | "Errors (version mismatches): 0", 2026-03-28 00:19:56.246179 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-28 00:19:56.246190 | orchestrator | "", 2026-03-28 00:19:56.246201 | orchestrator | "✅ All running containers match expected versions!" 2026-03-28 00:19:56.246212 | orchestrator | ] 2026-03-28 00:19:56.246223 | orchestrator | } 2026-03-28 00:19:56.246233 | orchestrator | 2026-03-28 00:19:56.246245 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-28 00:19:56.313252 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:19:56.313329 | orchestrator | 2026-03-28 00:19:56.313336 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:19:56.313344 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-28 00:19:56.313348 | orchestrator | 2026-03-28 00:19:56.419803 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-28 00:19:56.419878 | orchestrator | + deactivate 2026-03-28 00:19:56.419887 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-28 00:19:56.419895 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-28 00:19:56.419901 | orchestrator | + export PATH 2026-03-28 00:19:56.419907 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-28 00:19:56.419914 | orchestrator | + '[' -n '' ']' 2026-03-28 00:19:56.419920 | orchestrator | + hash -r 2026-03-28 00:19:56.419926 | orchestrator | + '[' -n '' ']' 2026-03-28 00:19:56.419931 | orchestrator | + unset VIRTUAL_ENV 2026-03-28 00:19:56.419937 | orchestrator | + 
unset VIRTUAL_ENV_PROMPT 2026-03-28 00:19:56.419943 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-03-28 00:19:56.419949 | orchestrator | + unset -f deactivate 2026-03-28 00:19:56.419993 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-28 00:19:56.427108 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-28 00:19:56.427196 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-28 00:19:56.427212 | orchestrator | + local max_attempts=60 2026-03-28 00:19:56.427224 | orchestrator | + local name=ceph-ansible 2026-03-28 00:19:56.427235 | orchestrator | + local attempt_num=1 2026-03-28 00:19:56.428158 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:19:56.467787 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:19:56.467906 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-28 00:19:56.467929 | orchestrator | + local max_attempts=60 2026-03-28 00:19:56.467949 | orchestrator | + local name=kolla-ansible 2026-03-28 00:19:56.467970 | orchestrator | + local attempt_num=1 2026-03-28 00:19:56.469266 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-28 00:19:56.501449 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:19:56.501557 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-28 00:19:56.501573 | orchestrator | + local max_attempts=60 2026-03-28 00:19:56.501585 | orchestrator | + local name=osism-ansible 2026-03-28 00:19:56.501596 | orchestrator | + local attempt_num=1 2026-03-28 00:19:56.502303 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-28 00:19:56.539316 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:19:56.539403 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-28 00:19:56.539448 | orchestrator | + sh -c 
/opt/configuration/scripts/disable-ara.sh 2026-03-28 00:19:57.267985 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-03-28 00:19:57.444047 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-28 00:19:57.444152 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-28 00:19:57.444166 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-28 00:19:57.444178 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-28 00:19:57.444191 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-28 00:19:57.444203 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-28 00:19:57.444214 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-28 00:19:57.444246 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-28 00:19:57.444258 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-28 00:19:57.444269 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-28 00:19:57.444279 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- 
osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-03-28 00:19:57.444290 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-28 00:19:57.444301 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-28 00:19:57.444312 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-28 00:19:57.444323 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-28 00:19:57.444334 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-28 00:19:57.450671 | orchestrator | ++ semver latest 7.0.0 2026-03-28 00:19:57.517560 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:19:57.517645 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 00:19:57.517660 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-28 00:19:57.522618 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-28 00:20:10.060413 | orchestrator | 2026-03-28 00:20:10 | INFO  | Prepare task for execution of resolvconf. 2026-03-28 00:20:10.246058 | orchestrator | 2026-03-28 00:20:10 | INFO  | Task a8a0d82a-0683-4af1-a01b-bb25a253ab1e (resolvconf) was prepared for execution. 2026-03-28 00:20:10.246138 | orchestrator | 2026-03-28 00:20:10 | INFO  | It takes a moment until task a8a0d82a-0683-4af1-a01b-bb25a253ab1e (resolvconf) has been started and output is visible here. 
2026-03-28 00:20:23.557284 | orchestrator | 2026-03-28 00:20:23.557436 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-28 00:20:23.557468 | orchestrator | 2026-03-28 00:20:23.557488 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:20:23.557550 | orchestrator | Saturday 28 March 2026 00:20:13 +0000 (0:00:00.185) 0:00:00.185 ******** 2026-03-28 00:20:23.557562 | orchestrator | ok: [testbed-manager] 2026-03-28 00:20:23.557575 | orchestrator | 2026-03-28 00:20:23.557586 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-28 00:20:23.557599 | orchestrator | Saturday 28 March 2026 00:20:17 +0000 (0:00:04.028) 0:00:04.213 ******** 2026-03-28 00:20:23.557610 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:20:23.557622 | orchestrator | 2026-03-28 00:20:23.557633 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-28 00:20:23.557644 | orchestrator | Saturday 28 March 2026 00:20:17 +0000 (0:00:00.055) 0:00:04.269 ******** 2026-03-28 00:20:23.557655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-28 00:20:23.557667 | orchestrator | 2026-03-28 00:20:23.557678 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-28 00:20:23.557701 | orchestrator | Saturday 28 March 2026 00:20:17 +0000 (0:00:00.082) 0:00:04.352 ******** 2026-03-28 00:20:23.557712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:20:23.557724 | orchestrator | 2026-03-28 00:20:23.557735 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-28 00:20:23.557745 | orchestrator | Saturday 28 March 2026 00:20:17 +0000 (0:00:00.090) 0:00:04.442 ******** 2026-03-28 00:20:23.557757 | orchestrator | ok: [testbed-manager] 2026-03-28 00:20:23.557768 | orchestrator | 2026-03-28 00:20:23.557779 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-28 00:20:23.557792 | orchestrator | Saturday 28 March 2026 00:20:18 +0000 (0:00:01.259) 0:00:05.702 ******** 2026-03-28 00:20:23.557805 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:20:23.557817 | orchestrator | 2026-03-28 00:20:23.557830 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-28 00:20:23.557842 | orchestrator | Saturday 28 March 2026 00:20:18 +0000 (0:00:00.058) 0:00:05.761 ******** 2026-03-28 00:20:23.557855 | orchestrator | ok: [testbed-manager] 2026-03-28 00:20:23.557867 | orchestrator | 2026-03-28 00:20:23.557880 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-28 00:20:23.557892 | orchestrator | Saturday 28 March 2026 00:20:19 +0000 (0:00:00.505) 0:00:06.266 ******** 2026-03-28 00:20:23.557905 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:20:23.557917 | orchestrator | 2026-03-28 00:20:23.557930 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-28 00:20:23.557943 | orchestrator | Saturday 28 March 2026 00:20:19 +0000 (0:00:00.085) 0:00:06.352 ******** 2026-03-28 00:20:23.557955 | orchestrator | changed: [testbed-manager] 2026-03-28 00:20:23.557980 | orchestrator | 2026-03-28 00:20:23.557994 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-28 00:20:23.558006 | orchestrator | Saturday 28 March 2026 00:20:20 +0000 (0:00:00.549) 0:00:06.901 ******** 2026-03-28 00:20:23.558088 | orchestrator | changed: 
[testbed-manager] 2026-03-28 00:20:23.558102 | orchestrator | 2026-03-28 00:20:23.558146 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-28 00:20:23.558214 | orchestrator | Saturday 28 March 2026 00:20:21 +0000 (0:00:01.082) 0:00:07.984 ******** 2026-03-28 00:20:23.558226 | orchestrator | ok: [testbed-manager] 2026-03-28 00:20:23.558237 | orchestrator | 2026-03-28 00:20:23.558248 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-28 00:20:23.558259 | orchestrator | Saturday 28 March 2026 00:20:22 +0000 (0:00:00.943) 0:00:08.927 ******** 2026-03-28 00:20:23.558270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-28 00:20:23.558282 | orchestrator | 2026-03-28 00:20:23.558293 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-28 00:20:23.558303 | orchestrator | Saturday 28 March 2026 00:20:22 +0000 (0:00:00.073) 0:00:09.000 ******** 2026-03-28 00:20:23.558315 | orchestrator | changed: [testbed-manager] 2026-03-28 00:20:23.558326 | orchestrator | 2026-03-28 00:20:23.558336 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:20:23.558349 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:20:23.558360 | orchestrator | 2026-03-28 00:20:23.558370 | orchestrator | 2026-03-28 00:20:23.558381 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:20:23.558392 | orchestrator | Saturday 28 March 2026 00:20:23 +0000 (0:00:01.203) 0:00:10.204 ******** 2026-03-28 00:20:23.558403 | orchestrator | =============================================================================== 2026-03-28 00:20:23.558413 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.03s 2026-03-28 00:20:23.558424 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.26s 2026-03-28 00:20:23.558435 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.20s 2026-03-28 00:20:23.558445 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s 2026-03-28 00:20:23.558456 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s 2026-03-28 00:20:23.558467 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2026-03-28 00:20:23.558528 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2026-03-28 00:20:23.558543 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-03-28 00:20:23.558554 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-03-28 00:20:23.558565 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-28 00:20:23.558575 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-03-28 00:20:23.558594 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-03-28 00:20:23.558605 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-03-28 00:20:23.735124 | orchestrator | + osism apply sshconfig 2026-03-28 00:20:35.150697 | orchestrator | 2026-03-28 00:20:35 | INFO  | Prepare task for execution of sshconfig. 2026-03-28 00:20:35.223999 | orchestrator | 2026-03-28 00:20:35 | INFO  | Task 70140b83-5f9b-4b41-b834-61bb13b48a27 (sshconfig) was prepared for execution. 
2026-03-28 00:20:35.224085 | orchestrator | 2026-03-28 00:20:35 | INFO  | It takes a moment until task 70140b83-5f9b-4b41-b834-61bb13b48a27 (sshconfig) has been started and output is visible here. 2026-03-28 00:20:46.250490 | orchestrator | 2026-03-28 00:20:46.250599 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-28 00:20:46.250613 | orchestrator | 2026-03-28 00:20:46.250623 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-28 00:20:46.250632 | orchestrator | Saturday 28 March 2026 00:20:38 +0000 (0:00:00.176) 0:00:00.176 ******** 2026-03-28 00:20:46.250665 | orchestrator | ok: [testbed-manager] 2026-03-28 00:20:46.250676 | orchestrator | 2026-03-28 00:20:46.250685 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-28 00:20:46.250693 | orchestrator | Saturday 28 March 2026 00:20:39 +0000 (0:00:00.919) 0:00:01.096 ******** 2026-03-28 00:20:46.250702 | orchestrator | changed: [testbed-manager] 2026-03-28 00:20:46.250711 | orchestrator | 2026-03-28 00:20:46.250720 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-28 00:20:46.250728 | orchestrator | Saturday 28 March 2026 00:20:39 +0000 (0:00:00.492) 0:00:01.589 ******** 2026-03-28 00:20:46.250737 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-28 00:20:46.250746 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-28 00:20:46.250755 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-28 00:20:46.250764 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-28 00:20:46.250772 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-28 00:20:46.250781 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-28 00:20:46.250789 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-28 00:20:46.250797 | orchestrator | 2026-03-28 00:20:46.250806 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-28 00:20:46.250815 | orchestrator | Saturday 28 March 2026 00:20:45 +0000 (0:00:05.723) 0:00:07.312 ******** 2026-03-28 00:20:46.250823 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:20:46.250832 | orchestrator | 2026-03-28 00:20:46.250841 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-28 00:20:46.250850 | orchestrator | Saturday 28 March 2026 00:20:45 +0000 (0:00:00.112) 0:00:07.424 ******** 2026-03-28 00:20:46.250858 | orchestrator | changed: [testbed-manager] 2026-03-28 00:20:46.250867 | orchestrator | 2026-03-28 00:20:46.250876 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:20:46.250886 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:20:46.250896 | orchestrator | 2026-03-28 00:20:46.250904 | orchestrator | 2026-03-28 00:20:46.250913 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:20:46.250932 | orchestrator | Saturday 28 March 2026 00:20:46 +0000 (0:00:00.614) 0:00:08.039 ******** 2026-03-28 00:20:46.250941 | orchestrator | =============================================================================== 2026-03-28 00:20:46.250949 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.72s 2026-03-28 00:20:46.250958 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.92s 2026-03-28 00:20:46.250966 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2026-03-28 00:20:46.250975 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.49s 2026-03-28 00:20:46.250983 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.11s 2026-03-28 00:20:46.451027 | orchestrator | + osism apply known-hosts 2026-03-28 00:20:57.760979 | orchestrator | 2026-03-28 00:20:57 | INFO  | Prepare task for execution of known-hosts. 2026-03-28 00:20:57.837045 | orchestrator | 2026-03-28 00:20:57 | INFO  | Task 4bf21de1-cc37-4686-9b65-1dec41e82637 (known-hosts) was prepared for execution. 2026-03-28 00:20:57.837148 | orchestrator | 2026-03-28 00:20:57 | INFO  | It takes a moment until task 4bf21de1-cc37-4686-9b65-1dec41e82637 (known-hosts) has been started and output is visible here. 2026-03-28 00:21:13.989091 | orchestrator | 2026-03-28 00:21:13.989227 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-28 00:21:13.989255 | orchestrator | 2026-03-28 00:21:13.989275 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-28 00:21:13.989331 | orchestrator | Saturday 28 March 2026 00:21:01 +0000 (0:00:00.197) 0:00:00.197 ******** 2026-03-28 00:21:13.989352 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-28 00:21:13.989370 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-28 00:21:13.989389 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-28 00:21:13.989405 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-28 00:21:13.989422 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-28 00:21:13.989495 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-28 00:21:13.989533 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-28 00:21:13.989552 | orchestrator | 2026-03-28 00:21:13.989574 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-28 
00:21:13.989596 | orchestrator | Saturday 28 March 2026 00:21:07 +0000 (0:00:06.567) 0:00:06.765 ******** 2026-03-28 00:21:13.989617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-28 00:21:13.989640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-28 00:21:13.989660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-28 00:21:13.989679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-28 00:21:13.989700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-28 00:21:13.989719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-28 00:21:13.989740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-28 00:21:13.989760 | orchestrator | 2026-03-28 00:21:13.989778 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:13.989797 | orchestrator | Saturday 28 March 2026 00:21:07 +0000 (0:00:00.172) 0:00:06.937 ******** 2026-03-28 00:21:13.989814 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINZtXRf2Eot6ibT7DoBpA4AavU/FsfrtJgWkiqvLGVpZ) 2026-03-28 00:21:13.989839 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZuY9ST4pUn9IVm8X4MdTI1/88+bsCTOXG0teRQFkE6iksZaszj6w33W8LGbeSg3C6oH8k7WOo/dTqdWAPuuBgnaVYoaPjlOCRYRHptxID5MGNo8BuRZwhsdUUF27aakSNFPGnIStjCrnnwQZC51NWe571u3OLw17ZhaFuI3JLFyIGySajn1JX+GHiIX8wqXUupm2iGwopKqzrgvrkbbJkxzIUHRRKzNlNSW22l2NFNSfkpDI+LXLvk7GEIg6FzqwrxY4jfisr87DFJRFl5Posk+3j8HSG+3m6PthBYoCrh4xsNIQOelo4hj8nuRn/EDci2MT2haabeFXcZvSx+HatOwF7dzQkicU6BQmdIprrCll2oCWkvsBXVOZuIO0r2VhF4B0KRYod6tRaqU8fFEcRuGxiGUm8Ms0ESEdVMhPtKOBQkbezM57SHWQBi/davQpV7oFZrOdZCAqf/tNqpnr0CBCC4vcd6hogZoxZQEA6pwCrUe6/ynAWLGVvWiRQLAE=) 2026-03-28 00:21:13.989864 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoq/nFATrHDuW52p9E7mc5fBJqpp4XOkBY0Qgl6i+CYoGzjGK2nHLXEkNRm8YAzGBxCHw4rSgXuSYLacN/Ga/w=) 2026-03-28 00:21:13.989886 | orchestrator | 2026-03-28 00:21:13.989906 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:13.989926 | orchestrator | Saturday 28 March 2026 00:21:09 +0000 (0:00:01.324) 0:00:08.262 ******** 2026-03-28 00:21:13.989999 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrug5LXWaZ3peEEpvx+Ldt27SPshmSeCGm621hBZOHbP6hCtd53OjcPLmzR6cNO8W5FW+i+5UN3SEXUDPJrFUcE7dsfKM4nJDagh7Eg3k4e1ECBhVgURa85YYNvzfI+/n9Ug7f5uRDNGRtt3G+2Z0NVxbcKtEqwoNjWqs8u6rUa/Fq57of+UaHTkuO8D2fqFO/d4Anv/DE0a9KMqkRtA6EtU2mYNCcPjq4Uo4wYs7if9NyTmUBLGZiDxjXhPfMNNizoNOF5cWBfIGT84npVtG5p3L3ujKnmV3FJHRbEaknINhS4TM8HIU+qvYehQ9wpfh3wWKCyUnblGs8ijSaLP9bgz2aGihvEPfQlA2hgbvpP2xMDxcLisNyM3cXEQL3dltHUR2IX8vu+BiOvCIsdbAQ+/NDudAtdQ3qjDsR4m/J3dKB4tSAtTdtYekRo0AlUG8usESfSc526Aegqx/YAgwbrALjULs4as2tnloWb0IaeW2d0kQtn62ZUClCwsMSq9M=) 
2026-03-28 00:21:13.990094 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoGED84RFrZyDgtOVI39/5KTS0uo9oaDUd1POYR2iF1w3nM0zPN6X8qgf46gPIdAlrNgXxOiWjMii48LAhElV0=) 2026-03-28 00:21:13.990109 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKib87Vz4ORY5A+B4gVChnkjmvTBUKUcvTnHfQrx7wBn) 2026-03-28 00:21:13.990120 | orchestrator | 2026-03-28 00:21:13.990131 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:13.990142 | orchestrator | Saturday 28 March 2026 00:21:10 +0000 (0:00:01.077) 0:00:09.340 ******** 2026-03-28 00:21:13.990224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDY/wVntE4gTWRUI3tmSqIp/08VsxUHuLtoGFvaD+xkxNhvEGlOWtFkcsaDgoSKEE5qZS9iaHdqRww9Dn1qAB9NLC9hDHzHXGEkKMKPEDpwX3xNGlFpE+8iLkvsnbE3qTv+2aa6ic8Gjg3aeRH2fzFd1L/206dXVoEQqb/zQel++XsFmpNetDjYFSqqltlNIHrLg5j5XxaC4zBGoybti0zCtS9aRLiExZ0b4SPgFuF2xNOo7Uc0DMvHU4kIh8yBW4YM36LHF8oyJnlpK2i9BxAMHwg+b2jv4BSMcu6Ge6HLLXIBjSjKo5njGM59d9vcqIrj6BUppgSzS9aQ2mBvO04SPO0W0D8TbT0OZ0jNdgU4urikGU+hfcx3ZTBNdg5rKIdeArJ3hFlzdNF0nQabRGArnge+jmgwaImVsvia51XyH0NaG04yJxxcDdmKUMw7fNhG/MuLsdYqMPrh36OIBUr3sry81igImckCs3+qqVrmcs2TL+kDEsk12InZ2FqlNq8=) 2026-03-28 00:21:13.990237 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI0wsKfXsL4b6JMeND1MzP3gOTR6zBtogIozMIGTlxV4W/qE1JK90lNF0xDVVvw8ulfw2ObjyQecUr4pMBb/skA=) 2026-03-28 00:21:13.990248 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOgEBg9BGQDZMcyLxEib+odRjicMZy1ebXK+Hs7bAAD2) 2026-03-28 00:21:13.990258 | orchestrator | 2026-03-28 00:21:13.990269 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:13.990280 
| orchestrator | Saturday 28 March 2026 00:21:11 +0000 (0:00:01.133) 0:00:10.474 ******** 2026-03-28 00:21:13.990291 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXQZJY7VxBpPLizfNCy7fY4Ptj4RWwokSOOUJQQRtGdK1UyGBWMCR2BrtKI1eTn7izXd7/gkm8XdmsGdSZ4HyPBBEZBoI8tSObSNHq19+mZ8O6tFDhhDWJ9nzXoCLN/2bIcp33MTnDB9Ge46lwBgexkeddYzDi4RIn36lkifnXjBaJoJBop8uQPFTOzH18jTsKy7y20yFwKRHgqDoYZIm1Nl9YFBlX05EZj96FKGkr6cWabVzJuWnJG5D5rIs8UdrZ1KCLEe8HekgYgVvc5NsWfWBHW3aEAtgr8wdzNe1yk/YNmmWv0FyuEFWF9varn737YnkfYoLF7Rnrzat3CyhvvVWb2WkZlGIUN3lmC+QCahHbObZiSq26CKDRMpuF6thrsqezo8l7zaEW7DGHCwRJ+adM29ZoPb/niX4rp/6r6T7I0ZMI0HWADck/HQu4a1nzCPZSHzReF7fww+kGxMiy9NfqSmcJOguKehw/VFdGESCmuZ0hwXDXG9DdipzqE40=) 2026-03-28 00:21:13.990302 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPga/rlQCProAWBOTMzxZ1veYPwkQvxyVrGTdyhB3rkXpJAORgUAuK6wKSOptOvmCldWyVgfKuDquTpzUx0HMVI=) 2026-03-28 00:21:13.990313 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBAttPMUi6GeHm7WLEvkjyLy6DqkwKq8Ffcq5HiMPsCQ) 2026-03-28 00:21:13.990324 | orchestrator | 2026-03-28 00:21:13.990335 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:13.990346 | orchestrator | Saturday 28 March 2026 00:21:12 +0000 (0:00:01.126) 0:00:11.600 ******** 2026-03-28 00:21:13.990357 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDCsSv2rrwfmU2isovO7+mvmCLioj1l7j3HfGG+/MZ0qmAV1eBkAqQuR6PJVOyJ1ebPvyEFyUWWY90f9MTBPJJeSSfpjGxJuEvG9bGfzHSFoIe4p8GbggX1Pvm/S5yAUYgLF6BgG96/K8oC5zrPdpJ7wfvV9IJp8iUgXFjJ7WAG+H7voBH+2RczwavN4Det38kPOFonLwXzPK2WEFNowZoRbg3TBGEg8QgLc65c1MWQK9zDtvW3fwbBQm3iFsSGifh++M02iaYMUT6i6w30kKOOYhQ8xAyZVAgX16ZQmux2vDaqQQr/KliwilBk05+uPesJFO8RHgRtrIlwUw1ohZU1GG7SESrBng0ifcqOYxteXIz/iVkbBwlLWDXyWEw72EcYtaWOK5nkrXrIXJ7pFuVyCeSHkbmCJ3DE9XpOhi+q3XYZlfwkUzGdVmOlD2HcqdS22Kl9MXvKKX2wTMe9C3oBuj9eoptEgtRUX2axgS7g/VOhvq2+X4qvWSr4fptQYX0=) 2026-03-28 00:21:13.990378 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILJOyisDqjYaIsgpLRqbu1N/ravXfN6+4dj0cD35r7/q) 2026-03-28 00:21:13.990389 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDPHNJMKM96JTQLtW/fvrEXtdqkyUPUdm+N53+T1rt/zCeP+WaBMK6hAaXWx+m/3l7sSJcYRGRJHFQ6OHwxmeZs=) 2026-03-28 00:21:13.990400 | orchestrator | 2026-03-28 00:21:13.990411 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:13.990422 | orchestrator | Saturday 28 March 2026 00:21:13 +0000 (0:00:01.134) 0:00:12.734 ******** 2026-03-28 00:21:13.990473 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdr0pp0K+Y5JvBsZ03tamIN9g8Sa4yuJvV/uUFlStLIz8IXwK8fUyyvkAmLYb3zxSmkQZwgulhkBucbKdo0L9c=) 2026-03-28 00:21:25.598588 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLKElbU8juaYoyDl6VDIe6BRDOhEsfySnb3IWHMEIpQ7N3pJnbLE3PZ77nJGKdTkZ/r7bIO1GMkGyxCK+pPwA6h3RmNbgmP1HDUFzjRDQ5r/oHsEiG+a0bCvqWXc/qcOKdcyXo35AJ+JI2fw/MLfHryfeeuDGs74+QkzEYxl98l6/g2xPmWLH21t+P/TRkyjnvgFzSL8UZXyRc4OOf91QbeU6LIf/Zuy/K8C+2af0sdNxTfYfXkLRDJC0TOQlbQeTBUce7n43EUcrai91jUIrl0/avYePzHil7RkQg9gOb5PbdlkpglLBhrBkDxfnoae+aO3hU7RvtummtzrcJL8FrhPOY0WVm6Mer2v4m/Px9yVl6iLpHbopItEocDlz6tywhH42NFxgzy9OQWiTJWPni2x3JdbL3VPts5MD5/G35G6jyumTCfpx8kgozuOT42xXEbfoth8dud8hF49zDrHCfDry8tC9C5kog2hwU0hPI949omQ10oga4CDRGck31SSU=) 2026-03-28 00:21:25.598670 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJVPwxlWSNxHYe5yJrAgvMd8qEEU5FOmwJVlxJ5cFN8J) 2026-03-28 00:21:25.598680 | orchestrator | 2026-03-28 00:21:25.598687 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:25.598694 | orchestrator | Saturday 28 March 2026 00:21:14 +0000 (0:00:01.141) 0:00:13.876 ******** 2026-03-28 00:21:25.598700 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVIdlizMcqf69+ZVfsmtrvQOgvBDxNacck8r7zQOdTOzAD8NEeajCRvGqLlcTisBpEDPvgDtQzS8xn7h2wwH7krAAEx6LoQtpWr61J87aPkE2AvS49Ev/2EcKeuHnpq7lpEl1CX+UrHLUP/vEfZpFwQmpcR7xiEBbCodljUf5PahpbEEardj4cUOhtTi8tuowgKKhjnvAvNiDNxMkk5tMJE88r2g6ye5vbMYFELMX49QfvXjKQCcOcszLEbsYlqB/5GXkIxYA3YnMLdj9f3qUoMKgBW9hgxNLsSA7O2zmDdTCWJAuxM7Dwq3EqsbnAl6QYH1qjqj/qnfTks/iVM58zZJ+g9xvfCThjUwl6ch7cyvaxo3ET33ASXiTLceaJ2bpT3PwtqYw23ZFdWqTQEacJH3y0NSFfv47tBoG4o91ucvWMoCxQVTDMRQfuFM2ki0xX+LwFaeyHCdSbtr14LCcv9xw/ZjbEhR79NfRi/ASbSn1iCtxbo1E9ah/4Na3vtMk=) 2026-03-28 00:21:25.598707 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHizS6yqgSnyQtNMEKXyBO8XyNQ8+0mK8+ubTfGwcmJOAykYdcFlnPaC0b509bmr5mrAzPE+h2eK75Fc97beCpw=) 2026-03-28 00:21:25.598714 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBZuRTECAcfaR/qBXo7VnM1fpQgRRy+QWbwuMFHzEIuS) 2026-03-28 00:21:25.598720 | orchestrator | 2026-03-28 00:21:25.598726 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-28 00:21:25.598747 | orchestrator | Saturday 28 March 2026 00:21:15 +0000 (0:00:01.100) 0:00:14.977 ******** 2026-03-28 00:21:25.598753 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-28 00:21:25.598759 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-28 00:21:25.598765 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-28 00:21:25.598786 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-28 00:21:25.598792 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-28 00:21:25.598798 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-28 00:21:25.598803 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-28 00:21:25.598809 | orchestrator | 2026-03-28 00:21:25.598814 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-28 00:21:25.598821 | orchestrator | Saturday 28 March 2026 00:21:21 +0000 (0:00:05.402) 0:00:20.380 ******** 2026-03-28 00:21:25.598827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-28 00:21:25.598834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-28 00:21:25.598840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-1) 2026-03-28 00:21:25.598845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-28 00:21:25.598851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-28 00:21:25.598856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-28 00:21:25.598862 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-28 00:21:25.598867 | orchestrator | 2026-03-28 00:21:25.598884 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:25.598890 | orchestrator | Saturday 28 March 2026 00:21:21 +0000 (0:00:00.172) 0:00:20.552 ******** 2026-03-28 00:21:25.598895 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoq/nFATrHDuW52p9E7mc5fBJqpp4XOkBY0Qgl6i+CYoGzjGK2nHLXEkNRm8YAzGBxCHw4rSgXuSYLacN/Ga/w=) 2026-03-28 00:21:25.598902 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZuY9ST4pUn9IVm8X4MdTI1/88+bsCTOXG0teRQFkE6iksZaszj6w33W8LGbeSg3C6oH8k7WOo/dTqdWAPuuBgnaVYoaPjlOCRYRHptxID5MGNo8BuRZwhsdUUF27aakSNFPGnIStjCrnnwQZC51NWe571u3OLw17ZhaFuI3JLFyIGySajn1JX+GHiIX8wqXUupm2iGwopKqzrgvrkbbJkxzIUHRRKzNlNSW22l2NFNSfkpDI+LXLvk7GEIg6FzqwrxY4jfisr87DFJRFl5Posk+3j8HSG+3m6PthBYoCrh4xsNIQOelo4hj8nuRn/EDci2MT2haabeFXcZvSx+HatOwF7dzQkicU6BQmdIprrCll2oCWkvsBXVOZuIO0r2VhF4B0KRYod6tRaqU8fFEcRuGxiGUm8Ms0ESEdVMhPtKOBQkbezM57SHWQBi/davQpV7oFZrOdZCAqf/tNqpnr0CBCC4vcd6hogZoxZQEA6pwCrUe6/ynAWLGVvWiRQLAE=) 2026-03-28 00:21:25.598908 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINZtXRf2Eot6ibT7DoBpA4AavU/FsfrtJgWkiqvLGVpZ) 2026-03-28 00:21:25.598913 | orchestrator | 2026-03-28 00:21:25.598919 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:25.598924 | orchestrator | Saturday 28 March 2026 00:21:22 +0000 (0:00:00.961) 0:00:21.514 ******** 2026-03-28 00:21:25.598930 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKib87Vz4ORY5A+B4gVChnkjmvTBUKUcvTnHfQrx7wBn) 2026-03-28 00:21:25.598935 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrug5LXWaZ3peEEpvx+Ldt27SPshmSeCGm621hBZOHbP6hCtd53OjcPLmzR6cNO8W5FW+i+5UN3SEXUDPJrFUcE7dsfKM4nJDagh7Eg3k4e1ECBhVgURa85YYNvzfI+/n9Ug7f5uRDNGRtt3G+2Z0NVxbcKtEqwoNjWqs8u6rUa/Fq57of+UaHTkuO8D2fqFO/d4Anv/DE0a9KMqkRtA6EtU2mYNCcPjq4Uo4wYs7if9NyTmUBLGZiDxjXhPfMNNizoNOF5cWBfIGT84npVtG5p3L3ujKnmV3FJHRbEaknINhS4TM8HIU+qvYehQ9wpfh3wWKCyUnblGs8ijSaLP9bgz2aGihvEPfQlA2hgbvpP2xMDxcLisNyM3cXEQL3dltHUR2IX8vu+BiOvCIsdbAQ+/NDudAtdQ3qjDsR4m/J3dKB4tSAtTdtYekRo0AlUG8usESfSc526Aegqx/YAgwbrALjULs4as2tnloWb0IaeW2d0kQtn62ZUClCwsMSq9M=) 2026-03-28 00:21:25.598946 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNoGED84RFrZyDgtOVI39/5KTS0uo9oaDUd1POYR2iF1w3nM0zPN6X8qgf46gPIdAlrNgXxOiWjMii48LAhElV0=) 2026-03-28 00:21:25.598951 | orchestrator | 2026-03-28 00:21:25.598957 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:25.598963 | orchestrator | Saturday 28 March 2026 00:21:23 +0000 (0:00:01.027) 0:00:22.541 ******** 2026-03-28 00:21:25.598969 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOgEBg9BGQDZMcyLxEib+odRjicMZy1ebXK+Hs7bAAD2) 2026-03-28 00:21:25.598974 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDY/wVntE4gTWRUI3tmSqIp/08VsxUHuLtoGFvaD+xkxNhvEGlOWtFkcsaDgoSKEE5qZS9iaHdqRww9Dn1qAB9NLC9hDHzHXGEkKMKPEDpwX3xNGlFpE+8iLkvsnbE3qTv+2aa6ic8Gjg3aeRH2fzFd1L/206dXVoEQqb/zQel++XsFmpNetDjYFSqqltlNIHrLg5j5XxaC4zBGoybti0zCtS9aRLiExZ0b4SPgFuF2xNOo7Uc0DMvHU4kIh8yBW4YM36LHF8oyJnlpK2i9BxAMHwg+b2jv4BSMcu6Ge6HLLXIBjSjKo5njGM59d9vcqIrj6BUppgSzS9aQ2mBvO04SPO0W0D8TbT0OZ0jNdgU4urikGU+hfcx3ZTBNdg5rKIdeArJ3hFlzdNF0nQabRGArnge+jmgwaImVsvia51XyH0NaG04yJxxcDdmKUMw7fNhG/MuLsdYqMPrh36OIBUr3sry81igImckCs3+qqVrmcs2TL+kDEsk12InZ2FqlNq8=) 2026-03-28 00:21:25.598980 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI0wsKfXsL4b6JMeND1MzP3gOTR6zBtogIozMIGTlxV4W/qE1JK90lNF0xDVVvw8ulfw2ObjyQecUr4pMBb/skA=) 2026-03-28 00:21:25.598986 | orchestrator | 2026-03-28 00:21:25.598991 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:25.598997 | orchestrator | Saturday 28 March 2026 00:21:24 +0000 (0:00:01.101) 0:00:23.642 ******** 2026-03-28 00:21:25.599006 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPga/rlQCProAWBOTMzxZ1veYPwkQvxyVrGTdyhB3rkXpJAORgUAuK6wKSOptOvmCldWyVgfKuDquTpzUx0HMVI=) 2026-03-28 00:21:25.599012 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBAttPMUi6GeHm7WLEvkjyLy6DqkwKq8Ffcq5HiMPsCQ) 2026-03-28 00:21:25.599028 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXQZJY7VxBpPLizfNCy7fY4Ptj4RWwokSOOUJQQRtGdK1UyGBWMCR2BrtKI1eTn7izXd7/gkm8XdmsGdSZ4HyPBBEZBoI8tSObSNHq19+mZ8O6tFDhhDWJ9nzXoCLN/2bIcp33MTnDB9Ge46lwBgexkeddYzDi4RIn36lkifnXjBaJoJBop8uQPFTOzH18jTsKy7y20yFwKRHgqDoYZIm1Nl9YFBlX05EZj96FKGkr6cWabVzJuWnJG5D5rIs8UdrZ1KCLEe8HekgYgVvc5NsWfWBHW3aEAtgr8wdzNe1yk/YNmmWv0FyuEFWF9varn737YnkfYoLF7Rnrzat3CyhvvVWb2WkZlGIUN3lmC+QCahHbObZiSq26CKDRMpuF6thrsqezo8l7zaEW7DGHCwRJ+adM29ZoPb/niX4rp/6r6T7I0ZMI0HWADck/HQu4a1nzCPZSHzReF7fww+kGxMiy9NfqSmcJOguKehw/VFdGESCmuZ0hwXDXG9DdipzqE40=) 2026-03-28 00:21:29.975876 | orchestrator | 2026-03-28 00:21:29.975971 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:29.975987 | orchestrator | Saturday 28 March 2026 00:21:25 +0000 (0:00:01.118) 0:00:24.760 ******** 2026-03-28 00:21:29.976015 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCsSv2rrwfmU2isovO7+mvmCLioj1l7j3HfGG+/MZ0qmAV1eBkAqQuR6PJVOyJ1ebPvyEFyUWWY90f9MTBPJJeSSfpjGxJuEvG9bGfzHSFoIe4p8GbggX1Pvm/S5yAUYgLF6BgG96/K8oC5zrPdpJ7wfvV9IJp8iUgXFjJ7WAG+H7voBH+2RczwavN4Det38kPOFonLwXzPK2WEFNowZoRbg3TBGEg8QgLc65c1MWQK9zDtvW3fwbBQm3iFsSGifh++M02iaYMUT6i6w30kKOOYhQ8xAyZVAgX16ZQmux2vDaqQQr/KliwilBk05+uPesJFO8RHgRtrIlwUw1ohZU1GG7SESrBng0ifcqOYxteXIz/iVkbBwlLWDXyWEw72EcYtaWOK5nkrXrIXJ7pFuVyCeSHkbmCJ3DE9XpOhi+q3XYZlfwkUzGdVmOlD2HcqdS22Kl9MXvKKX2wTMe9C3oBuj9eoptEgtRUX2axgS7g/VOhvq2+X4qvWSr4fptQYX0=) 2026-03-28 00:21:29.976052 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDPHNJMKM96JTQLtW/fvrEXtdqkyUPUdm+N53+T1rt/zCeP+WaBMK6hAaXWx+m/3l7sSJcYRGRJHFQ6OHwxmeZs=) 2026-03-28 00:21:29.976066 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILJOyisDqjYaIsgpLRqbu1N/ravXfN6+4dj0cD35r7/q) 2026-03-28 00:21:29.976078 | orchestrator | 2026-03-28 00:21:29.976089 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:29.976099 | orchestrator | Saturday 28 March 2026 00:21:26 +0000 (0:00:01.180) 0:00:25.940 ******** 2026-03-28 00:21:29.976110 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdr0pp0K+Y5JvBsZ03tamIN9g8Sa4yuJvV/uUFlStLIz8IXwK8fUyyvkAmLYb3zxSmkQZwgulhkBucbKdo0L9c=) 2026-03-28 00:21:29.976122 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLKElbU8juaYoyDl6VDIe6BRDOhEsfySnb3IWHMEIpQ7N3pJnbLE3PZ77nJGKdTkZ/r7bIO1GMkGyxCK+pPwA6h3RmNbgmP1HDUFzjRDQ5r/oHsEiG+a0bCvqWXc/qcOKdcyXo35AJ+JI2fw/MLfHryfeeuDGs74+QkzEYxl98l6/g2xPmWLH21t+P/TRkyjnvgFzSL8UZXyRc4OOf91QbeU6LIf/Zuy/K8C+2af0sdNxTfYfXkLRDJC0TOQlbQeTBUce7n43EUcrai91jUIrl0/avYePzHil7RkQg9gOb5PbdlkpglLBhrBkDxfnoae+aO3hU7RvtummtzrcJL8FrhPOY0WVm6Mer2v4m/Px9yVl6iLpHbopItEocDlz6tywhH42NFxgzy9OQWiTJWPni2x3JdbL3VPts5MD5/G35G6jyumTCfpx8kgozuOT42xXEbfoth8dud8hF49zDrHCfDry8tC9C5kog2hwU0hPI949omQ10oga4CDRGck31SSU=) 2026-03-28 00:21:29.976134 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJVPwxlWSNxHYe5yJrAgvMd8qEEU5FOmwJVlxJ5cFN8J) 2026-03-28 00:21:29.976145 | orchestrator | 2026-03-28 00:21:29.976156 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-28 00:21:29.976167 | orchestrator | Saturday 28 March 2026 00:21:27 +0000 (0:00:01.112) 0:00:27.053 ******** 
2026-03-28 00:21:29.976177 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBZuRTECAcfaR/qBXo7VnM1fpQgRRy+QWbwuMFHzEIuS) 2026-03-28 00:21:29.976189 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVIdlizMcqf69+ZVfsmtrvQOgvBDxNacck8r7zQOdTOzAD8NEeajCRvGqLlcTisBpEDPvgDtQzS8xn7h2wwH7krAAEx6LoQtpWr61J87aPkE2AvS49Ev/2EcKeuHnpq7lpEl1CX+UrHLUP/vEfZpFwQmpcR7xiEBbCodljUf5PahpbEEardj4cUOhtTi8tuowgKKhjnvAvNiDNxMkk5tMJE88r2g6ye5vbMYFELMX49QfvXjKQCcOcszLEbsYlqB/5GXkIxYA3YnMLdj9f3qUoMKgBW9hgxNLsSA7O2zmDdTCWJAuxM7Dwq3EqsbnAl6QYH1qjqj/qnfTks/iVM58zZJ+g9xvfCThjUwl6ch7cyvaxo3ET33ASXiTLceaJ2bpT3PwtqYw23ZFdWqTQEacJH3y0NSFfv47tBoG4o91ucvWMoCxQVTDMRQfuFM2ki0xX+LwFaeyHCdSbtr14LCcv9xw/ZjbEhR79NfRi/ASbSn1iCtxbo1E9ah/4Na3vtMk=) 2026-03-28 00:21:29.976200 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHizS6yqgSnyQtNMEKXyBO8XyNQ8+0mK8+ubTfGwcmJOAykYdcFlnPaC0b509bmr5mrAzPE+h2eK75Fc97beCpw=) 2026-03-28 00:21:29.976211 | orchestrator | 2026-03-28 00:21:29.976222 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-28 00:21:29.976233 | orchestrator | Saturday 28 March 2026 00:21:29 +0000 (0:00:01.090) 0:00:28.143 ******** 2026-03-28 00:21:29.976244 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-28 00:21:29.976255 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-28 00:21:29.976265 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-28 00:21:29.976276 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-28 00:21:29.976286 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-28 00:21:29.976297 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-28 00:21:29.976308 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-28 00:21:29.976326 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:21:29.976337 | orchestrator | 2026-03-28 00:21:29.976363 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-28 00:21:29.976375 | orchestrator | Saturday 28 March 2026 00:21:29 +0000 (0:00:00.240) 0:00:28.384 ******** 2026-03-28 00:21:29.976386 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:21:29.976397 | orchestrator | 2026-03-28 00:21:29.976407 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-28 00:21:29.976418 | orchestrator | Saturday 28 March 2026 00:21:29 +0000 (0:00:00.055) 0:00:28.439 ******** 2026-03-28 00:21:29.976501 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:21:29.976521 | orchestrator | 2026-03-28 00:21:29.976540 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-28 00:21:29.976566 | orchestrator | Saturday 28 March 2026 00:21:29 +0000 (0:00:00.054) 0:00:28.494 ******** 2026-03-28 00:21:29.976588 | orchestrator | changed: [testbed-manager] 2026-03-28 00:21:29.976607 | orchestrator | 2026-03-28 00:21:29.976626 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:21:29.976646 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-28 00:21:29.976666 | orchestrator | 2026-03-28 00:21:29.976685 | orchestrator | 2026-03-28 00:21:29.976700 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:21:29.976711 | orchestrator | Saturday 28 March 2026 00:21:29 +0000 (0:00:00.480) 0:00:28.974 ******** 2026-03-28 00:21:29.976729 | orchestrator | =============================================================================== 2026-03-28 00:21:29.976747 | orchestrator 
| osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.57s 2026-03-28 00:21:29.976765 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.40s 2026-03-28 00:21:29.976783 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.33s 2026-03-28 00:21:29.976800 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-03-28 00:21:29.976818 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-28 00:21:29.976837 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-28 00:21:29.976868 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-28 00:21:29.976888 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-28 00:21:29.976905 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-28 00:21:29.976923 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-03-28 00:21:29.976941 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-28 00:21:29.976960 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-28 00:21:29.976978 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-28 00:21:29.976998 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-28 00:21:29.977016 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-28 00:21:29.977035 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2026-03-28 00:21:29.977053 | orchestrator | 
osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2026-03-28 00:21:29.977070 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.24s 2026-03-28 00:21:29.977089 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-03-28 00:21:29.977107 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-28 00:21:30.109810 | orchestrator | + osism apply squid 2026-03-28 00:21:41.290664 | orchestrator | 2026-03-28 00:21:41 | INFO  | Prepare task for execution of squid. 2026-03-28 00:21:41.356504 | orchestrator | 2026-03-28 00:21:41 | INFO  | Task b1160976-43b4-4a30-ada5-2860a47f64a6 (squid) was prepared for execution. 2026-03-28 00:21:41.356589 | orchestrator | 2026-03-28 00:21:41 | INFO  | It takes a moment until task b1160976-43b4-4a30-ada5-2860a47f64a6 (squid) has been started and output is visible here. 
2026-03-28 00:23:36.896142 | orchestrator | 2026-03-28 00:23:36.896259 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-28 00:23:36.896276 | orchestrator | 2026-03-28 00:23:36.896288 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-28 00:23:36.896299 | orchestrator | Saturday 28 March 2026 00:21:44 +0000 (0:00:00.204) 0:00:00.204 ******** 2026-03-28 00:23:36.896380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:23:36.896403 | orchestrator | 2026-03-28 00:23:36.896420 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-28 00:23:36.896432 | orchestrator | Saturday 28 March 2026 00:21:44 +0000 (0:00:00.106) 0:00:00.310 ******** 2026-03-28 00:23:36.896444 | orchestrator | ok: [testbed-manager] 2026-03-28 00:23:36.896456 | orchestrator | 2026-03-28 00:23:36.896467 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-28 00:23:36.896478 | orchestrator | Saturday 28 March 2026 00:21:47 +0000 (0:00:02.564) 0:00:02.875 ******** 2026-03-28 00:23:36.896489 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-28 00:23:36.896500 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-28 00:23:36.896511 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-28 00:23:36.896522 | orchestrator | 2026-03-28 00:23:36.896533 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-28 00:23:36.896544 | orchestrator | Saturday 28 March 2026 00:21:48 +0000 (0:00:01.350) 0:00:04.225 ******** 2026-03-28 00:23:36.896561 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-28 00:23:36.896579 | 
orchestrator | 2026-03-28 00:23:36.896599 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-28 00:23:36.896619 | orchestrator | Saturday 28 March 2026 00:21:49 +0000 (0:00:01.125) 0:00:05.351 ******** 2026-03-28 00:23:36.896632 | orchestrator | ok: [testbed-manager] 2026-03-28 00:23:36.896643 | orchestrator | 2026-03-28 00:23:36.896673 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-28 00:23:36.896687 | orchestrator | Saturday 28 March 2026 00:21:50 +0000 (0:00:00.359) 0:00:05.710 ******** 2026-03-28 00:23:36.896699 | orchestrator | changed: [testbed-manager] 2026-03-28 00:23:36.896711 | orchestrator | 2026-03-28 00:23:36.896724 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-28 00:23:36.896737 | orchestrator | Saturday 28 March 2026 00:21:51 +0000 (0:00:00.946) 0:00:06.657 ******** 2026-03-28 00:23:36.896749 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-28 00:23:36.896763 | orchestrator | ok: [testbed-manager] 2026-03-28 00:23:36.896775 | orchestrator | 2026-03-28 00:23:36.896787 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-28 00:23:36.896800 | orchestrator | Saturday 28 March 2026 00:22:23 +0000 (0:00:32.409) 0:00:39.066 ******** 2026-03-28 00:23:36.896810 | orchestrator | changed: [testbed-manager] 2026-03-28 00:23:36.896821 | orchestrator | 2026-03-28 00:23:36.896832 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-28 00:23:36.896843 | orchestrator | Saturday 28 March 2026 00:22:35 +0000 (0:00:12.353) 0:00:51.419 ******** 2026-03-28 00:23:36.896854 | orchestrator | Pausing for 60 seconds 2026-03-28 00:23:36.896864 | orchestrator | changed: [testbed-manager] 2026-03-28 00:23:36.896875 | orchestrator | 2026-03-28 00:23:36.896886 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-28 00:23:36.896922 | orchestrator | Saturday 28 March 2026 00:23:35 +0000 (0:01:00.094) 0:01:51.514 ******** 2026-03-28 00:23:36.896933 | orchestrator | ok: [testbed-manager] 2026-03-28 00:23:36.896944 | orchestrator | 2026-03-28 00:23:36.896955 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-28 00:23:36.896966 | orchestrator | Saturday 28 March 2026 00:23:36 +0000 (0:00:00.072) 0:01:51.586 ******** 2026-03-28 00:23:36.896976 | orchestrator | changed: [testbed-manager] 2026-03-28 00:23:36.896987 | orchestrator | 2026-03-28 00:23:36.896997 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:23:36.897008 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:23:36.897019 | orchestrator | 2026-03-28 00:23:36.897030 | orchestrator | 2026-03-28 00:23:36.897040 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-28 00:23:36.897051 | orchestrator | Saturday 28 March 2026 00:23:36 +0000 (0:00:00.638) 0:01:52.225 ******** 2026-03-28 00:23:36.897061 | orchestrator | =============================================================================== 2026-03-28 00:23:36.897072 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-03-28 00:23:36.897082 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.41s 2026-03-28 00:23:36.897093 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.35s 2026-03-28 00:23:36.897103 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.56s 2026-03-28 00:23:36.897114 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.35s 2026-03-28 00:23:36.897124 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.13s 2026-03-28 00:23:36.897134 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2026-03-28 00:23:36.897145 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2026-03-28 00:23:36.897155 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2026-03-28 00:23:36.897166 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s 2026-03-28 00:23:36.897176 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-28 00:23:37.086588 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 00:23:37.086677 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-28 00:23:37.091851 | orchestrator | + set -e 2026-03-28 00:23:37.091912 | orchestrator | + NAMESPACE=kolla 2026-03-28 
00:23:37.091926 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-28 00:23:37.096880 | orchestrator | ++ semver latest 9.0.0 2026-03-28 00:23:37.156745 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-28 00:23:37.156820 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 00:23:37.158385 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-28 00:23:48.613673 | orchestrator | 2026-03-28 00:23:48 | INFO  | Prepare task for execution of operator. 2026-03-28 00:23:48.696017 | orchestrator | 2026-03-28 00:23:48 | INFO  | Task 8300172d-6ce4-460c-a8fa-6f8e24b5e484 (operator) was prepared for execution. 2026-03-28 00:23:48.696102 | orchestrator | 2026-03-28 00:23:48 | INFO  | It takes a moment until task 8300172d-6ce4-460c-a8fa-6f8e24b5e484 (operator) has been started and output is visible here. 2026-03-28 00:24:03.823932 | orchestrator | 2026-03-28 00:24:03.824029 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-28 00:24:03.824041 | orchestrator | 2026-03-28 00:24:03.824049 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 00:24:03.824057 | orchestrator | Saturday 28 March 2026 00:23:51 +0000 (0:00:00.202) 0:00:00.202 ******** 2026-03-28 00:24:03.824065 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:24:03.824073 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:24:03.824081 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:24:03.824115 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:24:03.824122 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:24:03.824129 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:24:03.824136 | orchestrator | 2026-03-28 00:24:03.824143 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-28 00:24:03.824150 | orchestrator | Saturday 28 March 2026 00:23:55 
+0000 (0:00:03.398) 0:00:03.601 ******** 2026-03-28 00:24:03.824157 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:24:03.824163 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:24:03.824170 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:24:03.824177 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:24:03.824184 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:24:03.824191 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:24:03.824197 | orchestrator | 2026-03-28 00:24:03.824204 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-28 00:24:03.824211 | orchestrator | 2026-03-28 00:24:03.824232 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-28 00:24:03.824239 | orchestrator | Saturday 28 March 2026 00:23:56 +0000 (0:00:00.818) 0:00:04.420 ******** 2026-03-28 00:24:03.824246 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:24:03.824252 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:24:03.824259 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:24:03.824265 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:24:03.824272 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:24:03.824279 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:24:03.824286 | orchestrator | 2026-03-28 00:24:03.824341 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-28 00:24:03.824348 | orchestrator | Saturday 28 March 2026 00:23:56 +0000 (0:00:00.168) 0:00:04.588 ******** 2026-03-28 00:24:03.824355 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:24:03.824362 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:24:03.824369 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:24:03.824375 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:24:03.824382 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:24:03.824389 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:24:03.824396 | orchestrator | 
2026-03-28 00:24:03.824403 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-28 00:24:03.824410 | orchestrator | Saturday 28 March 2026 00:23:56 +0000 (0:00:00.181) 0:00:04.770 ******** 2026-03-28 00:24:03.824417 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:24:03.824425 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:24:03.824431 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:24:03.824438 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:24:03.824445 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:24:03.824452 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:24:03.824458 | orchestrator | 2026-03-28 00:24:03.824465 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-28 00:24:03.824472 | orchestrator | Saturday 28 March 2026 00:23:57 +0000 (0:00:00.693) 0:00:05.463 ******** 2026-03-28 00:24:03.824479 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:24:03.824486 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:24:03.824494 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:24:03.824501 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:24:03.824508 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:24:03.824515 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:24:03.824522 | orchestrator | 2026-03-28 00:24:03.824529 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-28 00:24:03.824536 | orchestrator | Saturday 28 March 2026 00:23:58 +0000 (0:00:00.898) 0:00:06.362 ******** 2026-03-28 00:24:03.824544 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-28 00:24:03.824552 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-28 00:24:03.824559 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-28 00:24:03.824566 | orchestrator | changed: [testbed-node-4] => (item=adm) 
2026-03-28 00:24:03.824573 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-28 00:24:03.824587 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-28 00:24:03.824594 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-28 00:24:03.824601 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-28 00:24:03.824608 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-28 00:24:03.824615 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-28 00:24:03.824622 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-28 00:24:03.824630 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-28 00:24:03.824637 | orchestrator | 2026-03-28 00:24:03.824644 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-28 00:24:03.824651 | orchestrator | Saturday 28 March 2026 00:23:59 +0000 (0:00:01.154) 0:00:07.516 ******** 2026-03-28 00:24:03.824659 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:24:03.824666 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:24:03.824673 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:24:03.824680 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:24:03.824687 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:24:03.824694 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:24:03.824701 | orchestrator | 2026-03-28 00:24:03.824708 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-28 00:24:03.824716 | orchestrator | Saturday 28 March 2026 00:24:00 +0000 (0:00:01.293) 0:00:08.810 ******** 2026-03-28 00:24:03.824723 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 00:24:03.824731 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-28 00:24:03.824738 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 
2026-03-28 00:24:03.824745 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:24:03.824752 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:24:03.824772 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-28 00:24:03.824779 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-28 00:24:03.824786 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-28 00:24:03.824793 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-28 00:24:03.824799 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-28 00:24:03.824806 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-28 00:24:03.824813 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-28 00:24:03.824819 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:24:03.824826 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-28 00:24:03.824833 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-28 00:24:03.824843 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-28 00:24:03.824850 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:24:03.824857 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:24:03.824864 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:24:03.824870 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:24:03.824877 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-28 00:24:03.824884 | orchestrator |
2026-03-28 00:24:03.824890 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-28 00:24:03.824898 | orchestrator | Saturday 28 March 2026 00:24:01 +0000 (0:00:01.215) 0:00:10.025 ********
2026-03-28 00:24:03.824904 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:24:03.824911 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:24:03.824918 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:24:03.824929 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:24:03.824936 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:24:03.824943 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:24:03.824950 | orchestrator |
2026-03-28 00:24:03.824956 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-28 00:24:03.824963 | orchestrator | Saturday 28 March 2026 00:24:01 +0000 (0:00:00.173) 0:00:10.173 ********
2026-03-28 00:24:03.824970 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:24:03.824976 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:24:03.824983 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:24:03.824990 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:24:03.824996 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:24:03.825017 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:24:03.825032 | orchestrator |
2026-03-28 00:24:03.825039 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-28 00:24:03.825046 | orchestrator | Saturday 28 March 2026 00:24:02 +0000 (0:00:00.538) 0:00:10.347 ********
2026-03-28 00:24:03.825053 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:24:03.825059 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:24:03.825066 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:24:03.825073 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:24:03.825080 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:24:03.825086 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:24:03.825093 | orchestrator |
2026-03-28 00:24:03.825100 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-28 00:24:03.825107 | orchestrator | Saturday 28 March 2026 00:24:02 +0000 (0:00:00.182) 0:00:10.886 ********
2026-03-28 00:24:03.825114 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:24:03.825121 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:24:03.825127 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:24:03.825134 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:24:03.825141 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:24:03.825148 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:24:03.825154 | orchestrator |
2026-03-28 00:24:03.825161 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-28 00:24:03.825168 | orchestrator | Saturday 28 March 2026 00:24:02 +0000 (0:00:00.696) 0:00:11.068 ********
2026-03-28 00:24:03.825175 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-28 00:24:03.825182 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-28 00:24:03.825188 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:24:03.825195 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:24:03.825202 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-28 00:24:03.825209 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:24:03.825215 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 00:24:03.825222 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 00:24:03.825229 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:24:03.825244 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:24:03.825250 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 00:24:03.825264 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:24:03.825271 | orchestrator |
2026-03-28 00:24:03.825278 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-28 00:24:03.825285 | orchestrator | Saturday 28 March 2026 00:24:03 +0000 (0:00:00.696) 0:00:11.764 ********
2026-03-28 00:24:03.825309 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:24:03.825316 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:24:03.825322 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:24:03.825329 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:24:03.825336 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:24:03.825343 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:24:03.825349 | orchestrator |
2026-03-28 00:24:03.825356 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-28 00:24:03.825363 | orchestrator | Saturday 28 March 2026 00:24:03 +0000 (0:00:00.150) 0:00:11.915 ********
2026-03-28 00:24:03.825374 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:24:03.825380 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:24:03.825387 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:24:03.825394 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:24:03.825405 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:24:05.085865 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:24:05.085988 | orchestrator |
2026-03-28 00:24:05.086011 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-28 00:24:05.086107 | orchestrator | Saturday 28 March 2026 00:24:03 +0000 (0:00:00.164) 0:00:12.079 ********
2026-03-28 00:24:05.086126 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:24:05.086141 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:24:05.086156 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:24:05.086172 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:24:05.086188 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:24:05.086205 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:24:05.086222 | orchestrator |
2026-03-28 00:24:05.086239 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-28 00:24:05.086257 | orchestrator | Saturday 28 March 2026 00:24:03 +0000 (0:00:00.152) 0:00:12.232 ********
2026-03-28 00:24:05.086275 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:24:05.086374 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:24:05.086394 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:24:05.086411 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:24:05.086428 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:24:05.086445 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:24:05.086463 | orchestrator |
2026-03-28 00:24:05.086479 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-28 00:24:05.086524 | orchestrator | Saturday 28 March 2026 00:24:04 +0000 (0:00:00.633) 0:00:12.865 ********
2026-03-28 00:24:05.086544 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:24:05.086560 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:24:05.086578 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:24:05.086594 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:24:05.086610 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:24:05.086627 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:24:05.086643 | orchestrator |
2026-03-28 00:24:05.086660 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:24:05.086678 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:24:05.086698 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:24:05.086715 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:24:05.086732 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:24:05.086748 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:24:05.086764 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 00:24:05.086781 | orchestrator |
2026-03-28 00:24:05.086797 | orchestrator |
2026-03-28 00:24:05.086814 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:24:05.086831 | orchestrator | Saturday 28 March 2026 00:24:04 +0000 (0:00:00.253) 0:00:13.118 ********
2026-03-28 00:24:05.086847 | orchestrator | ===============================================================================
2026-03-28 00:24:05.086893 | orchestrator | Gathering Facts --------------------------------------------------------- 3.40s
2026-03-28 00:24:05.086910 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s
2026-03-28 00:24:05.086926 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s
2026-03-28 00:24:05.086944 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2026-03-28 00:24:05.086961 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.90s
2026-03-28 00:24:05.086976 | orchestrator | Do not require tty for all users ---------------------------------------- 0.82s
2026-03-28 00:24:05.086993 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2026-03-28 00:24:05.087009 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.69s
2026-03-28 00:24:05.087026 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2026-03-28 00:24:05.087042 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s
2026-03-28 00:24:05.087058 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-03-28 00:24:05.087075 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2026-03-28 00:24:05.087092 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-03-28 00:24:05.087109 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-03-28 00:24:05.087125 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-03-28 00:24:05.087140 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2026-03-28 00:24:05.087156 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2026-03-28 00:24:05.087171 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-03-28 00:24:05.087188 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2026-03-28 00:24:05.277918 | orchestrator | + osism apply --environment custom facts
2026-03-28 00:24:06.538079 | orchestrator | 2026-03-28 00:24:06 | INFO  | Trying to run play facts in environment custom
2026-03-28 00:24:16.665593 | orchestrator | 2026-03-28 00:24:16 | INFO  | Prepare task for execution of facts.
2026-03-28 00:24:16.756612 | orchestrator | 2026-03-28 00:24:16 | INFO  | Task c2567f26-2392-455c-bcbf-a1a7c99da37d (facts) was prepared for execution.
2026-03-28 00:24:16.756736 | orchestrator | 2026-03-28 00:24:16 | INFO  | It takes a moment until task c2567f26-2392-455c-bcbf-a1a7c99da37d (facts) has been started and output is visible here.
2026-03-28 00:24:57.742636 | orchestrator |
2026-03-28 00:24:57.742726 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-28 00:24:57.742735 | orchestrator |
2026-03-28 00:24:57.742751 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-28 00:24:57.742758 | orchestrator | Saturday 28 March 2026 00:24:19 +0000 (0:00:00.117) 0:00:00.117 ********
2026-03-28 00:24:57.742763 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:24:57.742769 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:24:57.742774 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:24:57.742780 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:24:57.742785 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:24:57.742790 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:24:57.742795 | orchestrator | ok: [testbed-manager]
2026-03-28 00:24:57.742801 | orchestrator |
2026-03-28 00:24:57.742806 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-28 00:24:57.742811 | orchestrator | Saturday 28 March 2026 00:24:21 +0000 (0:00:01.444) 0:00:01.562 ********
2026-03-28 00:24:57.742816 | orchestrator | ok: [testbed-manager]
2026-03-28 00:24:57.742822 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:24:57.742827 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:24:57.742846 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:24:57.742851 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:24:57.742856 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:24:57.742861 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:24:57.742866 | orchestrator |
2026-03-28 00:24:57.742871 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-28 00:24:57.742876 | orchestrator |
2026-03-28 00:24:57.742881 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-28 00:24:57.742886 | orchestrator | Saturday 28 March 2026 00:24:22 +0000 (0:00:01.219) 0:00:02.782 ********
2026-03-28 00:24:57.742891 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:24:57.742896 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:24:57.742901 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:24:57.742906 | orchestrator |
2026-03-28 00:24:57.742911 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-28 00:24:57.742917 | orchestrator | Saturday 28 March 2026 00:24:22 +0000 (0:00:00.109) 0:00:02.892 ********
2026-03-28 00:24:57.742922 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:24:57.742927 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:24:57.742932 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:24:57.742937 | orchestrator |
2026-03-28 00:24:57.742942 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-28 00:24:57.742947 | orchestrator | Saturday 28 March 2026 00:24:22 +0000 (0:00:00.248) 0:00:03.140 ********
2026-03-28 00:24:57.742952 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:24:57.742957 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:24:57.742961 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:24:57.742966 | orchestrator |
2026-03-28 00:24:57.742971 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-28 00:24:57.742977 | orchestrator | Saturday 28 March 2026 00:24:23 +0000 (0:00:00.157) 0:00:03.351 ********
2026-03-28 00:24:57.742983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:24:57.742989 | orchestrator |
2026-03-28 00:24:57.742994 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-28 00:24:57.742999 | orchestrator | Saturday 28 March 2026 00:24:23 +0000 (0:00:00.435) 0:00:03.508 ********
2026-03-28 00:24:57.743004 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:24:57.743009 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:24:57.743014 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:24:57.743019 | orchestrator |
2026-03-28 00:24:57.743024 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-28 00:24:57.743029 | orchestrator | Saturday 28 March 2026 00:24:23 +0000 (0:00:00.435) 0:00:03.944 ********
2026-03-28 00:24:57.743034 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:24:57.743039 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:24:57.743044 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:24:57.743049 | orchestrator |
2026-03-28 00:24:57.743054 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-28 00:24:57.743059 | orchestrator | Saturday 28 March 2026 00:24:23 +0000 (0:00:00.107) 0:00:04.051 ********
2026-03-28 00:24:57.743064 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:24:57.743069 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:24:57.743074 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:24:57.743079 | orchestrator |
2026-03-28 00:24:57.743084 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-28 00:24:57.743088 | orchestrator | Saturday 28 March 2026 00:24:24 +0000 (0:00:01.037) 0:00:05.088 ********
2026-03-28 00:24:57.743093 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:24:57.743099 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:24:57.743104 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:24:57.743109 | orchestrator |
2026-03-28 00:24:57.743114 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-28 00:24:57.743123 | orchestrator | Saturday 28 March 2026 00:24:25 +0000 (0:00:00.435) 0:00:05.524 ********
2026-03-28 00:24:57.743128 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:24:57.743133 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:24:57.743138 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:24:57.743143 | orchestrator |
2026-03-28 00:24:57.743148 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-28 00:24:57.743153 | orchestrator | Saturday 28 March 2026 00:24:26 +0000 (0:00:01.082) 0:00:06.606 ********
2026-03-28 00:24:57.743158 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:24:57.743163 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:24:57.743168 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:24:57.743173 | orchestrator |
2026-03-28 00:24:57.743178 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-28 00:24:57.743183 | orchestrator | Saturday 28 March 2026 00:24:41 +0000 (0:00:15.378) 0:00:21.984 ********
2026-03-28 00:24:57.743188 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:24:57.743193 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:24:57.743199 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:24:57.743205 | orchestrator |
2026-03-28 00:24:57.743211 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-28 00:24:57.743228 | orchestrator | Saturday 28 March 2026 00:24:41 +0000 (0:00:00.121) 0:00:22.105 ********
2026-03-28 00:24:57.743234 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:24:57.743240 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:24:57.743246 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:24:57.743275 | orchestrator |
2026-03-28 00:24:57.743281 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-28 00:24:57.743287 | orchestrator | Saturday 28 March 2026 00:24:48 +0000 (0:00:06.981) 0:00:29.086 ********
2026-03-28 00:24:57.743293 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:24:57.743299 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:24:57.743304 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:24:57.743311 | orchestrator |
2026-03-28 00:24:57.743316 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-28 00:24:57.743322 | orchestrator | Saturday 28 March 2026 00:24:49 +0000 (0:00:00.467) 0:00:29.554 ********
2026-03-28 00:24:57.743328 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-28 00:24:57.743335 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-28 00:24:57.743341 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-28 00:24:57.743347 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-28 00:24:57.743353 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-28 00:24:57.743359 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-28 00:24:57.743365 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-28 00:24:57.743371 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-28 00:24:57.743377 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-28 00:24:57.743383 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-28 00:24:57.743388 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-28 00:24:57.743421 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-28 00:24:57.743428 | orchestrator |
2026-03-28 00:24:57.743434 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-28 00:24:57.743439 | orchestrator | Saturday 28 March 2026 00:24:52 +0000 (0:00:03.420) 0:00:32.975 ********
2026-03-28 00:24:57.743446 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:24:57.743451 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:24:57.743457 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:24:57.743463 | orchestrator |
2026-03-28 00:24:57.743469 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 00:24:57.743479 | orchestrator |
2026-03-28 00:24:57.743485 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:24:57.743491 | orchestrator | Saturday 28 March 2026 00:24:54 +0000 (0:00:01.352) 0:00:34.327 ********
2026-03-28 00:24:57.743496 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:24:57.743502 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:24:57.743508 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:24:57.743514 | orchestrator | ok: [testbed-manager]
2026-03-28 00:24:57.743519 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:24:57.743525 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:24:57.743532 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:24:57.743541 | orchestrator |
2026-03-28 00:24:57.743549 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:24:57.743559 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:24:57.743568 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:24:57.743578 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:24:57.743587 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:24:57.743597 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:24:57.743606 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:24:57.743616 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:24:57.743621 | orchestrator |
2026-03-28 00:24:57.743626 | orchestrator |
2026-03-28 00:24:57.743631 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:24:57.743636 | orchestrator | Saturday 28 March 2026 00:24:57 +0000 (0:00:03.649) 0:00:37.977 ********
2026-03-28 00:24:57.743641 | orchestrator | ===============================================================================
2026-03-28 00:24:57.743646 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.38s
2026-03-28 00:24:57.743651 | orchestrator | Install required packages (Debian) -------------------------------------- 6.98s
2026-03-28 00:24:57.743656 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.65s
2026-03-28 00:24:57.743661 | orchestrator | Copy fact files --------------------------------------------------------- 3.42s
2026-03-28 00:24:57.743666 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s
2026-03-28 00:24:57.743671 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.35s
2026-03-28 00:24:57.743680 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-03-28 00:24:57.977510 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-03-28 00:24:57.977617 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2026-03-28 00:24:57.977629 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-03-28 00:24:57.977637 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-03-28 00:24:57.977645 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2026-03-28 00:24:57.977653 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.25s
2026-03-28 00:24:57.977661 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2026-03-28 00:24:57.977669 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-03-28 00:24:57.977696 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2026-03-28 00:24:57.977704 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-03-28 00:24:57.977711 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2026-03-28 00:24:58.224884 | orchestrator | + osism apply bootstrap
2026-03-28 00:25:09.610824 | orchestrator | 2026-03-28 00:25:09 | INFO  | Prepare task for execution of bootstrap.
2026-03-28 00:25:09.676419 | orchestrator | 2026-03-28 00:25:09 | INFO  | Task 5667f0a8-4988-4a81-a452-a258813bf0f0 (bootstrap) was prepared for execution.
2026-03-28 00:25:09.676499 | orchestrator | 2026-03-28 00:25:09 | INFO  | It takes a moment until task 5667f0a8-4988-4a81-a452-a258813bf0f0 (bootstrap) has been started and output is visible here.
2026-03-28 00:25:25.176030 | orchestrator |
2026-03-28 00:25:25.176130 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-28 00:25:25.176144 | orchestrator |
2026-03-28 00:25:25.176153 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-28 00:25:25.176163 | orchestrator | Saturday 28 March 2026 00:25:12 +0000 (0:00:00.194) 0:00:00.194 ********
2026-03-28 00:25:25.176172 | orchestrator | ok: [testbed-manager]
2026-03-28 00:25:25.176182 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:25:25.176191 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:25:25.176200 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:25:25.176208 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:25.176217 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:25.176225 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:25.176296 | orchestrator |
2026-03-28 00:25:25.176307 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 00:25:25.176315 | orchestrator |
2026-03-28 00:25:25.176325 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:25:25.176334 | orchestrator | Saturday 28 March 2026 00:25:13 +0000 (0:00:00.342) 0:00:00.536 ********
2026-03-28 00:25:25.176343 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:25:25.176351 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:25:25.176360 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:25:25.176369 | orchestrator | ok: [testbed-manager]
2026-03-28 00:25:25.176378 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:25.176386 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:25.176395 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:25.176409 | orchestrator |
2026-03-28 00:25:25.176423 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-28 00:25:25.176437 | orchestrator |
2026-03-28 00:25:25.176452 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 00:25:25.176467 | orchestrator | Saturday 28 March 2026 00:25:17 +0000 (0:00:04.633) 0:00:05.170 ********
2026-03-28 00:25:25.176482 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-28 00:25:25.176498 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-28 00:25:25.176512 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-28 00:25:25.176527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-28 00:25:25.176537 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-28 00:25:25.176546 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:25:25.176555 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-28 00:25:25.176564 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-28 00:25:25.176575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 00:25:25.176585 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 00:25:25.176594 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 00:25:25.176604 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-28 00:25:25.176640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-28 00:25:25.176651 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 00:25:25.176661 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-28 00:25:25.176671 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-28 00:25:25.176682 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-28 00:25:25.176693 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-28 00:25:25.176702 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 00:25:25.176712 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 00:25:25.176723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-28 00:25:25.176733 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:25:25.176743 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:25:25.176752 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 00:25:25.176762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 00:25:25.176772 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-28 00:25:25.176782 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 00:25:25.176792 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-28 00:25:25.176802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 00:25:25.176812 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-28 00:25:25.176823 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-28 00:25:25.176833 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-28 00:25:25.176843 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:25:25.176853 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-28 00:25:25.176863 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-28 00:25:25.176873 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 00:25:25.176883 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 00:25:25.176892 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-28 00:25:25.176903 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:25:25.176913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:25:25.176923 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 00:25:25.176933 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 00:25:25.176941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:25:25.176950 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 00:25:25.176958 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 00:25:25.176967 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:25:25.176991 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-28 00:25:25.177000 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:25:25.177009 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 00:25:25.177018 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-28 00:25:25.177026 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-28 00:25:25.177035 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-28 00:25:25.177043 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:25:25.177052 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-28 00:25:25.177060 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-28 00:25:25.177069 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:25:25.177077 | orchestrator |
2026-03-28 00:25:25.177086 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-28 00:25:25.177095 | orchestrator |
2026-03-28 00:25:25.177103 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-28 00:25:25.177119 | orchestrator | Saturday 28 March 2026 00:25:18 +0000 (0:00:00.522) 0:00:05.692 ********
2026-03-28 00:25:25.177128 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:25.177137 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:25:25.177145 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:25.177154 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:25.177162 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:25:25.177171 | orchestrator | ok: [testbed-manager]
2026-03-28 00:25:25.177182 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:25:25.177197 | orchestrator |
2026-03-28 00:25:25.177210 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-28 00:25:25.177224 | orchestrator | Saturday 28 March 2026 00:25:19 +0000 (0:00:01.203) 0:00:06.896 ********
2026-03-28 00:25:25.177263 | orchestrator | ok: [testbed-manager]
2026-03-28 00:25:25.177278 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:25:25.177292 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:25:25.177301 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:25:25.177309 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:25:25.177318 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:25:25.177326 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:25:25.177335 | orchestrator |
2026-03-28 00:25:25.177343 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-28 00:25:25.177352 | orchestrator | Saturday 28 March 2026 00:25:20 +0000 (0:00:01.276) 0:00:08.172 ********
2026-03-28 00:25:25.177362 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:25:25.177373 | orchestrator | 2026-03-28 00:25:25.177382 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-28 00:25:25.177390 | orchestrator | Saturday 28 March 2026 00:25:21 +0000 (0:00:00.310) 0:00:08.483 ******** 2026-03-28 00:25:25.177399 | orchestrator | changed: [testbed-manager] 2026-03-28 00:25:25.177407 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:25.177416 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:25:25.177425 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:25:25.177434 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:25:25.177445 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:25:25.177455 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:25:25.177466 | orchestrator | 2026-03-28 00:25:25.177495 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-28 00:25:25.177506 | orchestrator | Saturday 28 March 2026 00:25:22 +0000 (0:00:01.484) 0:00:09.968 ******** 2026-03-28 00:25:25.177517 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:25:25.177529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:25:25.177541 | orchestrator | 2026-03-28 00:25:25.177552 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-28 00:25:25.177563 | orchestrator | Saturday 28 March 2026 00:25:22 +0000 (0:00:00.299) 0:00:10.267 ******** 2026-03-28 00:25:25.177573 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:25:25.177584 | 
orchestrator | changed: [testbed-node-2] 2026-03-28 00:25:25.177600 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:25.177611 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:25:25.177622 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:25:25.177632 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:25:25.177643 | orchestrator | 2026-03-28 00:25:25.177654 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-28 00:25:25.177665 | orchestrator | Saturday 28 March 2026 00:25:24 +0000 (0:00:01.077) 0:00:11.345 ******** 2026-03-28 00:25:25.177676 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:25:25.177687 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:25:25.177706 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:25.177717 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:25:25.177728 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:25:25.177738 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:25:25.177749 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:25:25.177760 | orchestrator | 2026-03-28 00:25:25.177771 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-28 00:25:25.177781 | orchestrator | Saturday 28 March 2026 00:25:24 +0000 (0:00:00.576) 0:00:11.921 ******** 2026-03-28 00:25:25.177792 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:25:25.177803 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:25:25.177813 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:25:25.177824 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:25:25.177834 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:25:25.177845 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:25:25.177856 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:25.177867 | orchestrator | 2026-03-28 00:25:25.177883 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-28 00:25:25.177899 | orchestrator | Saturday 28 March 2026 00:25:25 +0000 (0:00:00.470) 0:00:12.392 ******** 2026-03-28 00:25:25.177917 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:25:25.177936 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:25:25.177968 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:25:37.161626 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:25:37.161728 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:25:37.161741 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:25:37.161750 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:25:37.161760 | orchestrator | 2026-03-28 00:25:37.161770 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-28 00:25:37.161783 | orchestrator | Saturday 28 March 2026 00:25:25 +0000 (0:00:00.225) 0:00:12.617 ******** 2026-03-28 00:25:37.161801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:25:37.161843 | orchestrator | 2026-03-28 00:25:37.161859 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-28 00:25:37.161875 | orchestrator | Saturday 28 March 2026 00:25:25 +0000 (0:00:00.354) 0:00:12.972 ******** 2026-03-28 00:25:37.161890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:25:37.161905 | orchestrator | 2026-03-28 00:25:37.161921 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-28 
00:25:37.161935 | orchestrator | Saturday 28 March 2026 00:25:25 +0000 (0:00:00.354) 0:00:13.327 ******** 2026-03-28 00:25:37.161951 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:37.161966 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:37.161980 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:37.161995 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:37.162009 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:37.162091 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:37.162106 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:37.162122 | orchestrator | 2026-03-28 00:25:37.162137 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-28 00:25:37.162152 | orchestrator | Saturday 28 March 2026 00:25:27 +0000 (0:00:01.231) 0:00:14.558 ******** 2026-03-28 00:25:37.162167 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:25:37.162182 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:25:37.162196 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:25:37.162211 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:25:37.162307 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:25:37.162354 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:25:37.162369 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:25:37.162384 | orchestrator | 2026-03-28 00:25:37.162398 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-28 00:25:37.162412 | orchestrator | Saturday 28 March 2026 00:25:27 +0000 (0:00:00.261) 0:00:14.819 ******** 2026-03-28 00:25:37.162426 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:37.162440 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:37.162454 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:37.162468 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:37.162482 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:37.162496 | orchestrator 
| ok: [testbed-node-3] 2026-03-28 00:25:37.162511 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:37.162524 | orchestrator | 2026-03-28 00:25:37.162539 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-28 00:25:37.162553 | orchestrator | Saturday 28 March 2026 00:25:28 +0000 (0:00:00.566) 0:00:15.385 ******** 2026-03-28 00:25:37.162567 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:25:37.162581 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:25:37.162594 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:25:37.162608 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:25:37.162622 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:25:37.162635 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:25:37.162649 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:25:37.162663 | orchestrator | 2026-03-28 00:25:37.162678 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-28 00:25:37.162693 | orchestrator | Saturday 28 March 2026 00:25:28 +0000 (0:00:00.272) 0:00:15.658 ******** 2026-03-28 00:25:37.162718 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:37.162732 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:37.162746 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:25:37.162760 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:25:37.162774 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:25:37.162788 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:25:37.162802 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:25:37.162816 | orchestrator | 2026-03-28 00:25:37.162830 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-28 00:25:37.162845 | orchestrator | Saturday 28 March 2026 00:25:28 +0000 (0:00:00.650) 0:00:16.309 ******** 2026-03-28 00:25:37.162858 | orchestrator | ok: 
[testbed-manager] 2026-03-28 00:25:37.162873 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:25:37.162886 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:37.162901 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:25:37.162914 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:25:37.162928 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:25:37.162942 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:25:37.162955 | orchestrator | 2026-03-28 00:25:37.162969 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-28 00:25:37.162984 | orchestrator | Saturday 28 March 2026 00:25:30 +0000 (0:00:01.129) 0:00:17.439 ******** 2026-03-28 00:25:37.162997 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:37.163011 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:37.163025 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:37.163039 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:37.163053 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:37.163067 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:37.163081 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:37.163095 | orchestrator | 2026-03-28 00:25:37.163109 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-28 00:25:37.163123 | orchestrator | Saturday 28 March 2026 00:25:31 +0000 (0:00:01.082) 0:00:18.521 ******** 2026-03-28 00:25:37.163162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:25:37.163187 | orchestrator | 2026-03-28 00:25:37.163202 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-28 00:25:37.163215 | orchestrator | Saturday 28 March 2026 
00:25:31 +0000 (0:00:00.334) 0:00:18.855 ******** 2026-03-28 00:25:37.163248 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:25:37.163262 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:25:37.163276 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:25:37.163290 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:25:37.163304 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:37.163318 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:25:37.163332 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:25:37.163346 | orchestrator | 2026-03-28 00:25:37.163360 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-28 00:25:37.163374 | orchestrator | Saturday 28 March 2026 00:25:32 +0000 (0:00:01.272) 0:00:20.128 ******** 2026-03-28 00:25:37.163388 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:37.163402 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:37.163416 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:37.163430 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:37.163444 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:37.163457 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:37.163471 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:37.163485 | orchestrator | 2026-03-28 00:25:37.163499 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-28 00:25:37.163513 | orchestrator | Saturday 28 March 2026 00:25:33 +0000 (0:00:00.249) 0:00:20.378 ******** 2026-03-28 00:25:37.163527 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:37.163541 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:37.163555 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:37.163569 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:37.163583 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:37.163596 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:37.163610 | 
orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:37.163624 | orchestrator | 2026-03-28 00:25:37.163638 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-28 00:25:37.163652 | orchestrator | Saturday 28 March 2026 00:25:33 +0000 (0:00:00.254) 0:00:20.633 ******** 2026-03-28 00:25:37.163666 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:37.163680 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:37.163694 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:37.163708 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:37.163721 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:37.163735 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:37.163748 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:37.163762 | orchestrator | 2026-03-28 00:25:37.163776 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-28 00:25:37.163790 | orchestrator | Saturday 28 March 2026 00:25:33 +0000 (0:00:00.239) 0:00:20.872 ******** 2026-03-28 00:25:37.163805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:25:37.163821 | orchestrator | 2026-03-28 00:25:37.163835 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-28 00:25:37.163849 | orchestrator | Saturday 28 March 2026 00:25:33 +0000 (0:00:00.311) 0:00:21.183 ******** 2026-03-28 00:25:37.163863 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:37.163877 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:37.163891 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:37.163905 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:37.163919 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:37.163933 | orchestrator | ok: 
[testbed-node-4] 2026-03-28 00:25:37.163948 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:37.163962 | orchestrator | 2026-03-28 00:25:37.163985 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-28 00:25:37.164000 | orchestrator | Saturday 28 March 2026 00:25:34 +0000 (0:00:00.527) 0:00:21.710 ******** 2026-03-28 00:25:37.164014 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:25:37.164028 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:25:37.164043 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:25:37.164057 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:25:37.164070 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:25:37.164084 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:25:37.164099 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:25:37.164113 | orchestrator | 2026-03-28 00:25:37.164127 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-28 00:25:37.164141 | orchestrator | Saturday 28 March 2026 00:25:34 +0000 (0:00:00.246) 0:00:21.957 ******** 2026-03-28 00:25:37.164155 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:37.164169 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:37.164184 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:25:37.164197 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:37.164211 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:25:37.164245 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:37.164260 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:37.164274 | orchestrator | 2026-03-28 00:25:37.164288 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-28 00:25:37.164301 | orchestrator | Saturday 28 March 2026 00:25:35 +0000 (0:00:01.022) 0:00:22.979 ******** 2026-03-28 00:25:37.164315 | orchestrator | ok: [testbed-manager] 2026-03-28 
00:25:37.164329 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:25:37.164343 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:25:37.164357 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:37.164371 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:25:37.164385 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:37.164399 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:25:37.164413 | orchestrator | 2026-03-28 00:25:37.164427 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-28 00:25:37.164441 | orchestrator | Saturday 28 March 2026 00:25:36 +0000 (0:00:00.550) 0:00:23.530 ******** 2026-03-28 00:25:37.164455 | orchestrator | ok: [testbed-manager] 2026-03-28 00:25:37.164469 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:25:37.164483 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:25:37.164497 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:25:37.164518 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:17.354433 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.354549 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:17.354566 | orchestrator | 2026-03-28 00:26:17.354578 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-28 00:26:17.354591 | orchestrator | Saturday 28 March 2026 00:25:37 +0000 (0:00:01.058) 0:00:24.588 ******** 2026-03-28 00:26:17.354601 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:17.354612 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.354622 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.354632 | orchestrator | changed: [testbed-manager] 2026-03-28 00:26:17.354642 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:17.354653 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:17.354663 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:17.354674 | orchestrator | 2026-03-28 00:26:17.354684 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-28 00:26:17.354694 | orchestrator | Saturday 28 March 2026 00:25:52 +0000 (0:00:15.124) 0:00:39.712 ******** 2026-03-28 00:26:17.354704 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:17.354714 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:17.354724 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:17.354734 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:17.354744 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:17.354754 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.354763 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.354799 | orchestrator | 2026-03-28 00:26:17.354810 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-28 00:26:17.354820 | orchestrator | Saturday 28 March 2026 00:25:52 +0000 (0:00:00.222) 0:00:39.935 ******** 2026-03-28 00:26:17.354830 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:17.354840 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:17.354850 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:17.354860 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:17.354870 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:17.354879 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.354890 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.354900 | orchestrator | 2026-03-28 00:26:17.354910 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-28 00:26:17.354920 | orchestrator | Saturday 28 March 2026 00:25:52 +0000 (0:00:00.226) 0:00:40.161 ******** 2026-03-28 00:26:17.354932 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:17.354942 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:17.354953 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:17.354960 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:17.354983 | orchestrator | ok: 
[testbed-node-3] 2026-03-28 00:26:17.354990 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.354997 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.355004 | orchestrator | 2026-03-28 00:26:17.355012 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-28 00:26:17.355019 | orchestrator | Saturday 28 March 2026 00:25:53 +0000 (0:00:00.226) 0:00:40.388 ******** 2026-03-28 00:26:17.355029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:26:17.355038 | orchestrator | 2026-03-28 00:26:17.355045 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-28 00:26:17.355053 | orchestrator | Saturday 28 March 2026 00:25:53 +0000 (0:00:00.321) 0:00:40.709 ******** 2026-03-28 00:26:17.355060 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:17.355067 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:17.355074 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.355081 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:17.355088 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:17.355095 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.355102 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:17.355109 | orchestrator | 2026-03-28 00:26:17.355116 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-28 00:26:17.355123 | orchestrator | Saturday 28 March 2026 00:25:54 +0000 (0:00:01.586) 0:00:42.296 ******** 2026-03-28 00:26:17.355130 | orchestrator | changed: [testbed-manager] 2026-03-28 00:26:17.355137 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:17.355145 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:17.355152 | orchestrator | 
changed: [testbed-node-2] 2026-03-28 00:26:17.355163 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:26:17.355171 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:26:17.355178 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:26:17.355185 | orchestrator | 2026-03-28 00:26:17.355192 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-28 00:26:17.355229 | orchestrator | Saturday 28 March 2026 00:25:56 +0000 (0:00:01.069) 0:00:43.366 ******** 2026-03-28 00:26:17.355236 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:17.355243 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:17.355250 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:17.355257 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:17.355264 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:17.355271 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.355277 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.355284 | orchestrator | 2026-03-28 00:26:17.355292 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-28 00:26:17.355307 | orchestrator | Saturday 28 March 2026 00:25:56 +0000 (0:00:00.788) 0:00:44.155 ******** 2026-03-28 00:26:17.355315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:26:17.355324 | orchestrator | 2026-03-28 00:26:17.355330 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-28 00:26:17.355337 | orchestrator | Saturday 28 March 2026 00:25:57 +0000 (0:00:00.330) 0:00:44.485 ******** 2026-03-28 00:26:17.355343 | orchestrator | changed: [testbed-manager] 2026-03-28 00:26:17.355349 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:17.355355 | 
orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:17.355362 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:26:17.355368 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:17.355374 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:26:17.355380 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:26:17.355386 | orchestrator | 2026-03-28 00:26:17.355410 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-03-28 00:26:17.355417 | orchestrator | Saturday 28 March 2026 00:25:58 +0000 (0:00:01.073) 0:00:45.558 ******** 2026-03-28 00:26:17.355424 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:26:17.355430 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:26:17.355436 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:26:17.355442 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:26:17.355448 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:26:17.355454 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:26:17.355460 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:26:17.355467 | orchestrator | 2026-03-28 00:26:17.355473 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-28 00:26:17.355479 | orchestrator | Saturday 28 March 2026 00:25:58 +0000 (0:00:00.236) 0:00:45.795 ******** 2026-03-28 00:26:17.355485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:26:17.355492 | orchestrator | 2026-03-28 00:26:17.355498 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-28 00:26:17.355504 | orchestrator | Saturday 28 March 2026 00:25:58 +0000 (0:00:00.320) 0:00:46.115 ******** 2026-03-28 00:26:17.355510 | orchestrator | ok: 
[testbed-manager] 2026-03-28 00:26:17.355516 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:17.355522 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:17.355528 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:17.355535 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:17.355541 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.355547 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.355553 | orchestrator | 2026-03-28 00:26:17.355559 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-28 00:26:17.355565 | orchestrator | Saturday 28 March 2026 00:26:00 +0000 (0:00:01.754) 0:00:47.870 ******** 2026-03-28 00:26:17.355571 | orchestrator | changed: [testbed-manager] 2026-03-28 00:26:17.355577 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:17.355583 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:17.355590 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:17.355596 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:26:17.355602 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:26:17.355608 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:26:17.355614 | orchestrator | 2026-03-28 00:26:17.355620 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-28 00:26:17.355626 | orchestrator | Saturday 28 March 2026 00:26:01 +0000 (0:00:01.131) 0:00:49.001 ******** 2026-03-28 00:26:17.355632 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:26:17.355638 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:26:17.355649 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:26:17.355655 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:26:17.355661 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:26:17.355667 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:26:17.355673 | orchestrator | changed: [testbed-manager] 2026-03-28 00:26:17.355679 | 
orchestrator | 2026-03-28 00:26:17.355685 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-28 00:26:17.355692 | orchestrator | Saturday 28 March 2026 00:26:14 +0000 (0:00:12.451) 0:01:01.453 ******** 2026-03-28 00:26:17.355698 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:17.355704 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:17.355710 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:17.355716 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.355722 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:17.355728 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.355734 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:17.355740 | orchestrator | 2026-03-28 00:26:17.355746 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-28 00:26:17.355753 | orchestrator | Saturday 28 March 2026 00:26:15 +0000 (0:00:01.567) 0:01:03.021 ******** 2026-03-28 00:26:17.355759 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:17.355765 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:17.355771 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:17.355777 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:17.355783 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:17.355789 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.355799 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.355805 | orchestrator | 2026-03-28 00:26:17.355811 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-28 00:26:17.355818 | orchestrator | Saturday 28 March 2026 00:26:16 +0000 (0:00:00.856) 0:01:03.877 ******** 2026-03-28 00:26:17.355824 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:17.355830 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:17.355836 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:17.355842 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 00:26:17.355848 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:17.355854 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.355860 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.355866 | orchestrator | 2026-03-28 00:26:17.355872 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-28 00:26:17.355878 | orchestrator | Saturday 28 March 2026 00:26:16 +0000 (0:00:00.239) 0:01:04.116 ******** 2026-03-28 00:26:17.355884 | orchestrator | ok: [testbed-manager] 2026-03-28 00:26:17.355890 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:26:17.355897 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:26:17.355903 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:26:17.355909 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:26:17.355915 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:26:17.355921 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:26:17.355927 | orchestrator | 2026-03-28 00:26:17.355933 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-28 00:26:17.355939 | orchestrator | Saturday 28 March 2026 00:26:17 +0000 (0:00:00.235) 0:01:04.352 ******** 2026-03-28 00:26:17.355945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:26:17.355952 | orchestrator | 2026-03-28 00:26:17.355963 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-28 00:28:44.390940 | orchestrator | Saturday 28 March 2026 00:26:17 +0000 (0:00:00.333) 0:01:04.685 ******** 2026-03-28 00:28:44.391077 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:44.391104 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:44.391193 | orchestrator | 
ok: [testbed-node-2] 2026-03-28 00:28:44.391206 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:44.391324 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:44.391339 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:44.391350 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:44.391361 | orchestrator | 2026-03-28 00:28:44.391373 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-03-28 00:28:44.391384 | orchestrator | Saturday 28 March 2026 00:26:19 +0000 (0:00:01.871) 0:01:06.557 ******** 2026-03-28 00:28:44.391395 | orchestrator | changed: [testbed-manager] 2026-03-28 00:28:44.391406 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:28:44.391417 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:28:44.391428 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:28:44.391440 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:28:44.391453 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:28:44.391465 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:28:44.391477 | orchestrator | 2026-03-28 00:28:44.391490 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-28 00:28:44.391503 | orchestrator | Saturday 28 March 2026 00:26:19 +0000 (0:00:00.535) 0:01:07.093 ******** 2026-03-28 00:28:44.391515 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:44.391528 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:44.391540 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:44.391552 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:28:44.391565 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:44.391577 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:44.391590 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:44.391602 | orchestrator | 2026-03-28 00:28:44.391615 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-28 
00:28:44.391627 | orchestrator | Saturday 28 March 2026 00:26:19 +0000 (0:00:00.243) 0:01:07.336 ******** 2026-03-28 00:28:44.391640 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:44.391652 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:44.391664 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:28:44.391676 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:44.391689 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:44.391701 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:44.391713 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:44.391725 | orchestrator | 2026-03-28 00:28:44.391738 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-28 00:28:44.391751 | orchestrator | Saturday 28 March 2026 00:26:21 +0000 (0:00:01.235) 0:01:08.572 ******** 2026-03-28 00:28:44.391763 | orchestrator | changed: [testbed-manager] 2026-03-28 00:28:44.391783 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:28:44.391802 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:28:44.391820 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:28:44.391838 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:28:44.391855 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:28:44.391873 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:28:44.391890 | orchestrator | 2026-03-28 00:28:44.391907 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-28 00:28:44.391927 | orchestrator | Saturday 28 March 2026 00:26:23 +0000 (0:00:02.022) 0:01:10.594 ******** 2026-03-28 00:28:44.391946 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:44.391964 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:28:44.391981 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:44.391998 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:44.392014 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:44.392031 | orchestrator | ok: 
[testbed-node-5] 2026-03-28 00:28:44.392049 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:44.392066 | orchestrator | 2026-03-28 00:28:44.392084 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-28 00:28:44.392102 | orchestrator | Saturday 28 March 2026 00:26:25 +0000 (0:00:02.719) 0:01:13.313 ******** 2026-03-28 00:28:44.392150 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:44.392169 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:28:44.392187 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:44.392221 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:44.392240 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:44.392258 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:44.392275 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:44.392294 | orchestrator | 2026-03-28 00:28:44.392313 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-28 00:28:44.392331 | orchestrator | Saturday 28 March 2026 00:27:06 +0000 (0:00:40.843) 0:01:54.158 ******** 2026-03-28 00:28:44.392351 | orchestrator | changed: [testbed-manager] 2026-03-28 00:28:44.392364 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:28:44.392375 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:28:44.392385 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:28:44.392396 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:28:44.392407 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:28:44.392417 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:28:44.392428 | orchestrator | 2026-03-28 00:28:44.392446 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-28 00:28:44.392473 | orchestrator | Saturday 28 March 2026 00:28:27 +0000 (0:01:20.773) 0:03:14.931 ******** 2026-03-28 00:28:44.392495 | orchestrator | ok: [testbed-manager] 2026-03-28 00:28:44.392534 | orchestrator | 
ok: [testbed-node-0] 2026-03-28 00:28:44.392553 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:28:44.392571 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:44.392588 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:44.392605 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:44.392622 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:44.392641 | orchestrator | 2026-03-28 00:28:44.392660 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-28 00:28:44.392680 | orchestrator | Saturday 28 March 2026 00:28:29 +0000 (0:00:01.777) 0:03:16.709 ******** 2026-03-28 00:28:44.392694 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:28:44.392704 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:28:44.392715 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:28:44.392726 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:28:44.392737 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:28:44.392747 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:28:44.392758 | orchestrator | changed: [testbed-manager] 2026-03-28 00:28:44.392769 | orchestrator | 2026-03-28 00:28:44.392779 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-28 00:28:44.392790 | orchestrator | Saturday 28 March 2026 00:28:43 +0000 (0:00:13.773) 0:03:30.482 ******** 2026-03-28 00:28:44.392843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-28 00:28:44.392867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-28 00:28:44.392883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-28 00:28:44.392911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-28 00:28:44.392923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-28 00:28:44.392945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-03-28 00:28:44.392956 | orchestrator | 2026-03-28 00:28:44.392968 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-28 00:28:44.392979 | orchestrator | Saturday 28 March 2026 00:28:43 +0000 (0:00:00.444) 0:03:30.926 ******** 2026-03-28 00:28:44.392990 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 00:28:44.393001 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:28:44.393017 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 00:28:44.393028 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:28:44.393039 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 00:28:44.393049 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:28:44.393060 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-28 00:28:44.393071 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:28:44.393081 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 00:28:44.393092 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 00:28:44.393103 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 00:28:44.393144 | orchestrator | 2026-03-28 00:28:44.393164 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-28 00:28:44.393184 | orchestrator | Saturday 28 March 2026 00:28:44 +0000 (0:00:00.719) 0:03:31.646 ******** 2026-03-28 00:28:44.393202 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 00:28:44.393241 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 00:28:44.393254 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 00:28:44.393265 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 00:28:44.393291 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-28 00:28:44.393311 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 00:28:51.107392 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 00:28:51.107499 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 00:28:51.107515 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 00:28:51.107524 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 00:28:51.107533 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:28:51.107541 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 00:28:51.107567 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 00:28:51.107574 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 00:28:51.107580 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 00:28:51.107586 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 00:28:51.107593 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 
00:28:51.107599 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-28 00:28:51.107605 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 00:28:51.107611 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 00:28:51.107617 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-28 00:28:51.107623 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 00:28:51.107629 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 00:28:51.107636 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 00:28:51.107642 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 00:28:51.107648 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 00:28:51.107655 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 00:28:51.107661 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 00:28:51.107667 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 00:28:51.107673 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-28 00:28:51.107679 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 00:28:51.107685 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 00:28:51.107691 | 
orchestrator | skipping: [testbed-node-4] 2026-03-28 00:28:51.107698 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:28:51.107704 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-28 00:28:51.107721 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-28 00:28:51.107728 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-28 00:28:51.107734 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-28 00:28:51.107740 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-28 00:28:51.107746 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-28 00:28:51.107752 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-28 00:28:51.107758 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-28 00:28:51.107764 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-28 00:28:51.107770 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:28:51.107777 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-28 00:28:51.107788 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-28 00:28:51.107794 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-28 00:28:51.107800 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-28 00:28:51.107806 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-28 00:28:51.107826 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-28 00:28:51.107833 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-28 00:28:51.107840 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-28 00:28:51.107846 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-28 00:28:51.107852 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-28 00:28:51.107858 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-28 00:28:51.107864 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-28 00:28:51.107870 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-28 00:28:51.107878 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-28 00:28:51.107888 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-28 00:28:51.107899 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-28 00:28:51.107909 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-28 00:28:51.107919 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-28 00:28:51.107929 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-28 00:28:51.107939 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-28 
00:28:51.107948 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-28 00:28:51.107957 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-28 00:28:51.107967 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-28 00:28:51.107976 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-28 00:28:51.107986 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-28 00:28:51.107997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-28 00:28:51.108007 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-28 00:28:51.108017 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-28 00:28:51.108028 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-28 00:28:51.108038 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-28 00:28:51.108049 | orchestrator | 2026-03-28 00:28:51.108060 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-28 00:28:51.108072 | orchestrator | Saturday 28 March 2026 00:28:48 +0000 (0:00:04.669) 0:03:36.316 ******** 2026-03-28 00:28:51.108079 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:28:51.108092 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:28:51.108134 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:28:51.108146 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:28:51.108156 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:28:51.108165 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:28:51.108176 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-28 00:28:51.108187 | orchestrator | 2026-03-28 00:28:51.108197 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-28 00:28:51.108208 | orchestrator | Saturday 28 March 2026 00:28:49 +0000 (0:00:00.583) 0:03:36.900 ******** 2026-03-28 00:28:51.108218 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:28:51.108229 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:28:51.108240 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:28:51.108248 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:28:51.108254 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:28:51.108260 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:28:51.108266 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:28:51.108272 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:28:51.108279 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:28:51.108285 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:28:51.108298 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 
00:29:04.422096 | orchestrator | 2026-03-28 00:29:04.422212 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-03-28 00:29:04.422223 | orchestrator | Saturday 28 March 2026 00:28:51 +0000 (0:00:01.584) 0:03:38.484 ******** 2026-03-28 00:29:04.422230 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:29:04.422240 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:29:04.422249 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:29:04.422257 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:29:04.422265 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:29:04.422272 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:29:04.422280 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-28 00:29:04.422288 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:29:04.422293 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:29:04.422298 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:29:04.422302 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-28 00:29:04.422307 | orchestrator | 2026-03-28 00:29:04.422312 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-28 00:29:04.422316 | orchestrator | Saturday 28 March 2026 00:28:51 +0000 (0:00:00.497) 0:03:38.982 ******** 2026-03-28 00:29:04.422321 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-28 
00:29:04.422325 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:29:04.422351 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-28 00:29:04.422359 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:29:04.422366 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-28 00:29:04.422377 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-28 00:29:04.422386 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:29:04.422393 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:29:04.422399 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-28 00:29:04.422422 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-28 00:29:04.422429 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-28 00:29:04.422448 | orchestrator | 2026-03-28 00:29:04.422467 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-28 00:29:04.422475 | orchestrator | Saturday 28 March 2026 00:28:52 +0000 (0:00:00.756) 0:03:39.738 ******** 2026-03-28 00:29:04.422482 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:29:04.422497 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:29:04.422505 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:29:04.422510 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:29:04.422514 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:29:04.422520 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:29:04.422527 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:29:04.422534 | orchestrator | 2026-03-28 00:29:04.422545 | orchestrator | TASK 
TASK [osism.commons.services : Populate service facts] *************************
Saturday 28 March 2026 00:28:52 +0000 (0:00:00.300) 0:03:40.039 ********
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.commons.services : Check services] *********************************
Saturday 28 March 2026 00:28:58 +0000 (0:00:06.039) 0:03:46.078 ********
skipping: [testbed-manager] => (item=nscd)
skipping: [testbed-node-0] => (item=nscd)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=nscd)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nscd)
skipping: [testbed-node-3] => (item=nscd)
skipping: [testbed-node-2]
skipping: [testbed-node-4] => (item=nscd)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=nscd)
skipping: [testbed-node-5]

TASK [osism.commons.services : Start/enable required services] *****************
Saturday 28 March 2026 00:28:59 +0000 (0:00:00.352) 0:03:46.430 ********
ok: [testbed-node-0] => (item=cron)
ok: [testbed-manager] => (item=cron)
ok: [testbed-node-1] => (item=cron)
ok: [testbed-node-2] => (item=cron)
ok: [testbed-node-3] => (item=cron)
ok: [testbed-node-4] => (item=cron)
ok: [testbed-node-5] => (item=cron)

TASK [osism.commons.motd : Include distribution specific configure tasks] ******
Saturday 28 March 2026 00:29:00 +0000 (0:00:01.071) 0:03:47.502 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.motd : Remove update-motd package] *************************
Saturday 28 March 2026 00:29:00 +0000 (0:00:00.500) 0:03:48.002 ********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
Saturday 28 March 2026 00:29:02 +0000 (0:00:01.367) 0:03:49.370 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
Saturday 28 March 2026 00:29:02 +0000 (0:00:00.585) 0:03:49.956 ********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
Saturday 28 March 2026 00:29:03 +0000 (0:00:00.602) 0:03:50.594 ********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
Saturday 28 March 2026 00:29:03 +0000 (0:00:00.637) 0:03:51.196 ********
changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656317.8506553, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656337.210793, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656340.894995, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656344.3094888, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656330.1003706, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656369.4407127, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774656325.6719482, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})

TASK [osism.commons.motd : Copy motd file] *************************************
Saturday 28 March 2026 00:29:04 +0000 (0:00:01.042) 0:03:52.239 ********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.motd : Copy issue file] ************************************
Saturday 28 March 2026 00:29:06 +0000 (0:00:01.151) 0:03:53.390 ********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [osism.commons.motd : Copy issue.net file] ********************************
Saturday 28 March 2026 00:29:07 +0000 (0:00:01.172) 0:03:54.563 ********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.motd : Configure SSH to print the motd] ********************
Saturday 28 March 2026 00:29:08 +0000 (0:00:01.298) 0:03:55.862 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
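The motd tasks above (check for /etc/default/motd-news, disable motd-news, enumerate /etc/pam.d, strip the pam_motd.so rules) follow a common Debian-family pattern. The following is an illustrative sketch reconstructed only from the task names in this log, not the actual osism.commons.motd role source; the module arguments and the `pam_files` register name are assumptions:

```yaml
# Sketch (assumed, not the role's source): disable dynamic motd on Debian-family hosts.
- name: Disable the dynamic motd-news service
  ansible.builtin.lineinfile:
    path: /etc/default/motd-news
    regexp: '^ENABLED='
    line: 'ENABLED=0'

- name: Get all configuration files in /etc/pam.d
  ansible.builtin.find:
    paths: /etc/pam.d
  register: pam_files  # hypothetical variable name

# find returns one stat-style dict per file, which is why the log above prints
# a full file-attribute dict for each loop item.
- name: Remove pam_motd.so rule
  ansible.builtin.lineinfile:
    path: "{{ item.path }}"
    regexp: 'pam_motd\.so'
    state: absent
  loop: "{{ pam_files.files }}"
```

This would explain the per-item output seen in "Remove pam_motd.so rule": each loop item is the stat dict that `find` produced for a file under /etc/pam.d.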
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
Saturday 28 March 2026 00:29:08 +0000 (0:00:00.291) 0:03:56.154 ********
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.rng : Include distribution specific install tasks] ********
Saturday 28 March 2026 00:29:09 +0000 (0:00:00.766) 0:03:56.921 ********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.rng : Install rng package] ********************************
Saturday 28 March 2026 00:29:10 +0000 (0:00:00.453) 0:03:57.374 ********
ok: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]

TASK [osism.services.rng : Remove haveged package] *****************************
Saturday 28 March 2026 00:29:18 +0000 (0:00:08.401) 0:04:05.776 ********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.services.rng : Manage rng service] *********************************
Saturday 28 March 2026 00:29:19 +0000 (0:00:01.349) 0:04:07.126 ********
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.commons.cleanup : Gather variables for each operating system] ******
Saturday 28 March 2026 00:29:20 +0000 (0:00:01.008) 0:04:08.134 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
Saturday 28 March 2026 00:29:21 +0000 (0:00:00.290) 0:04:08.424 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
Saturday 28 March 2026 00:29:21 +0000 (0:00:00.332) 0:04:08.757 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.cleanup : Populate service facts] **************************
Saturday 28 March 2026 00:29:21 +0000 (0:00:00.357) 0:04:09.114 ********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-3]

TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
Saturday 28 March 2026 00:29:27 +0000 (0:00:05.688) 0:04:14.803 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
Saturday 28 March 2026 00:29:27 +0000 (0:00:00.434) 0:04:15.237 ********
skipping: [testbed-manager] => (item=apt-daily-upgrade)
skipping: [testbed-manager] => (item=apt-daily)
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item=apt-daily-upgrade)
skipping: [testbed-node-0] => (item=apt-daily)
skipping: [testbed-node-1] => (item=apt-daily-upgrade)
skipping: [testbed-node-1] => (item=apt-daily)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=apt-daily-upgrade)
skipping: [testbed-node-2] => (item=apt-daily)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=apt-daily-upgrade)
skipping: [testbed-node-3] => (item=apt-daily)
skipping: [testbed-node-4] => (item=apt-daily-upgrade)
skipping: [testbed-node-4] => (item=apt-daily)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=apt-daily-upgrade)
skipping: [testbed-node-5] => (item=apt-daily)
skipping: [testbed-node-5]

TASK [osism.commons.cleanup : Include service tasks] ***************************
Saturday 28 March 2026 00:29:28 +0000 (0:00:00.380) 0:04:15.618 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.cleanup : Cleanup services] ********************************
Saturday 28 March 2026 00:29:28 +0000 (0:00:00.591) 0:04:16.209 ********
skipping: [testbed-manager] => (item=ModemManager.service)
skipping: [testbed-node-0] => (item=ModemManager.service)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=ModemManager.service)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=ModemManager.service)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=ModemManager.service)
skipping: [testbed-node-4] => (item=ModemManager.service)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=ModemManager.service)
skipping: [testbed-node-5]

TASK [osism.commons.cleanup : Include packages tasks] **************************
Saturday 28 March 2026 00:29:29 +0000 (0:00:00.367) 0:04:16.577 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.cleanup : Cleanup installed packages] **********************
Saturday 28 March 2026 00:29:29 +0000 (0:00:00.465) 0:04:17.042 ********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-manager]

TASK [osism.commons.cleanup : Remove cloudinit package] ************************
Saturday 28 March 2026 00:30:05 +0000 (0:00:35.742) 0:04:52.784 ********
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-3]

TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
Saturday 28 March 2026 00:30:14 +0000 (0:00:08.866) 0:05:01.651 ********
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-3]

TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
Saturday 28 March 2026 00:30:22 +0000 (0:00:08.199) 0:05:09.850 ********
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
Saturday 28 March 2026 00:30:24 +0000 (0:00:01.635) 0:05:11.486 ********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-5]

TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
Saturday 28 March 2026 00:30:30 +0000 (0:00:06.131) 0:05:17.618 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
Saturday 28 March 2026 00:30:30 +0000 (0:00:00.445) 0:05:18.063 ********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.timezone : Install tzdata package] *************************
Saturday 28 March 2026 00:30:31 +0000 (0:00:00.780) 0:05:18.844 ********
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.commons.timezone : Set timezone to UTC] ****************************
Saturday 28 March 2026 00:30:33 +0000 (0:00:01.846) 0:05:20.690 ********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-1]

TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
Saturday 28 March 2026 00:30:34 +0000 (0:00:00.773) 0:05:21.464 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
Saturday 28 March 2026 00:30:34 +0000 (0:00:00.343) 0:05:21.808 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Gather variables for each operating system] ******
Saturday 28 March 2026 00:30:34 +0000 (0:00:00.456) 0:05:22.264 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Set docker_version variable to default value] ****
Saturday 28 March 2026 00:30:35 +0000 (0:00:00.466) 0:05:22.731 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
00:30:42.037405 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:30:42.037416 | orchestrator | 2026-03-28 00:30:42.037427 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-28 00:30:42.037438 | orchestrator | Saturday 28 March 2026 00:30:35 +0000 (0:00:00.296) 0:05:23.028 ******** 2026-03-28 00:30:42.037449 | orchestrator | ok: [testbed-manager] 2026-03-28 00:30:42.037460 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:30:42.037470 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:30:42.037481 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:30:42.037492 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:30:42.037502 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:30:42.037513 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:30:42.037523 | orchestrator | 2026-03-28 00:30:42.037534 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-28 00:30:42.037545 | orchestrator | Saturday 28 March 2026 00:30:36 +0000 (0:00:00.337) 0:05:23.366 ******** 2026-03-28 00:30:42.037556 | orchestrator | ok: [testbed-manager] =>  2026-03-28 00:30:42.037567 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:30:42.037578 | orchestrator | ok: [testbed-node-0] =>  2026-03-28 00:30:42.037588 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:30:42.037599 | orchestrator | ok: [testbed-node-1] =>  2026-03-28 00:30:42.037610 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:30:42.037620 | orchestrator | ok: [testbed-node-2] =>  2026-03-28 00:30:42.037631 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:30:42.037657 | orchestrator | ok: [testbed-node-3] =>  2026-03-28 00:30:42.037669 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:30:42.037680 | orchestrator | ok: [testbed-node-4] =>  2026-03-28 00:30:42.037690 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:30:42.037701 | orchestrator | ok: [testbed-node-5] =>  
2026-03-28 00:30:42.037712 | orchestrator |  docker_version: 5:27.5.1 2026-03-28 00:30:42.037730 | orchestrator | 2026-03-28 00:30:42.037741 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-28 00:30:42.037752 | orchestrator | Saturday 28 March 2026 00:30:36 +0000 (0:00:00.319) 0:05:23.685 ******** 2026-03-28 00:30:42.037763 | orchestrator | ok: [testbed-manager] =>  2026-03-28 00:30:42.037773 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:30:42.037784 | orchestrator | ok: [testbed-node-0] =>  2026-03-28 00:30:42.037794 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:30:42.037805 | orchestrator | ok: [testbed-node-1] =>  2026-03-28 00:30:42.037822 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:30:42.037840 | orchestrator | ok: [testbed-node-2] =>  2026-03-28 00:30:42.037857 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:30:42.037875 | orchestrator | ok: [testbed-node-3] =>  2026-03-28 00:30:42.037894 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:30:42.037912 | orchestrator | ok: [testbed-node-4] =>  2026-03-28 00:30:42.037923 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:30:42.037934 | orchestrator | ok: [testbed-node-5] =>  2026-03-28 00:30:42.037945 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-28 00:30:42.037955 | orchestrator | 2026-03-28 00:30:42.037966 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-28 00:30:42.037977 | orchestrator | Saturday 28 March 2026 00:30:36 +0000 (0:00:00.311) 0:05:23.996 ******** 2026-03-28 00:30:42.037988 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:30:42.037998 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:30:42.038009 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:30:42.038113 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:30:42.038129 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 00:30:42.038140 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:30:42.038151 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:30:42.038162 | orchestrator | 2026-03-28 00:30:42.038173 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-28 00:30:42.038184 | orchestrator | Saturday 28 March 2026 00:30:36 +0000 (0:00:00.308) 0:05:24.305 ******** 2026-03-28 00:30:42.038195 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:30:42.038205 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:30:42.038216 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:30:42.038226 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:30:42.038237 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:30:42.038248 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:30:42.038259 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:30:42.038269 | orchestrator | 2026-03-28 00:30:42.038280 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-28 00:30:42.038291 | orchestrator | Saturday 28 March 2026 00:30:37 +0000 (0:00:00.268) 0:05:24.574 ******** 2026-03-28 00:30:42.038310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:30:42.038324 | orchestrator | 2026-03-28 00:30:42.038335 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-28 00:30:42.038346 | orchestrator | Saturday 28 March 2026 00:30:37 +0000 (0:00:00.465) 0:05:25.039 ******** 2026-03-28 00:30:42.038357 | orchestrator | ok: [testbed-manager] 2026-03-28 00:30:42.038368 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:30:42.038379 | orchestrator | ok: [testbed-node-2] 2026-03-28 
00:30:42.038389 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:30:42.038400 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:30:42.038411 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:30:42.038421 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:30:42.038432 | orchestrator | 2026-03-28 00:30:42.038443 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-28 00:30:42.038454 | orchestrator | Saturday 28 March 2026 00:30:38 +0000 (0:00:00.856) 0:05:25.896 ******** 2026-03-28 00:30:42.038473 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:30:42.038484 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:30:42.038495 | orchestrator | ok: [testbed-manager] 2026-03-28 00:30:42.038505 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:30:42.038516 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:30:42.038527 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:30:42.038537 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:30:42.038548 | orchestrator | 2026-03-28 00:30:42.038559 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-28 00:30:42.038571 | orchestrator | Saturday 28 March 2026 00:30:41 +0000 (0:00:03.079) 0:05:28.975 ******** 2026-03-28 00:30:42.038582 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-28 00:30:42.038593 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-28 00:30:42.038604 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-28 00:30:42.038615 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-28 00:30:42.038625 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-28 00:30:42.038636 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:30:42.038647 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-28 00:30:42.038657 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-03-28 00:30:42.038668 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-28 00:30:42.038679 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-28 00:30:42.038690 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:30:42.038700 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-28 00:30:42.038711 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-28 00:30:42.038722 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-28 00:30:42.038732 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:30:42.038743 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-28 00:30:42.038763 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-28 00:31:44.058512 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-28 00:31:44.058603 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:31:44.058612 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-28 00:31:44.058620 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-28 00:31:44.058626 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-28 00:31:44.058633 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:31:44.058639 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:31:44.058645 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-28 00:31:44.058651 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-28 00:31:44.058657 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-28 00:31:44.058663 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:31:44.058668 | orchestrator | 2026-03-28 00:31:44.058675 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-28 00:31:44.058682 | orchestrator | Saturday 
28 March 2026 00:30:42 +0000 (0:00:00.643) 0:05:29.619 ******** 2026-03-28 00:31:44.058688 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:44.058694 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.058700 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.058706 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.058712 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:44.058718 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.058724 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.058729 | orchestrator | 2026-03-28 00:31:44.058735 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-28 00:31:44.058741 | orchestrator | Saturday 28 March 2026 00:30:49 +0000 (0:00:07.339) 0:05:36.959 ******** 2026-03-28 00:31:44.058747 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.058773 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:44.058779 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.058784 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.058790 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:44.058796 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.058802 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.058807 | orchestrator | 2026-03-28 00:31:44.058813 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-28 00:31:44.058819 | orchestrator | Saturday 28 March 2026 00:30:50 +0000 (0:00:01.124) 0:05:38.084 ******** 2026-03-28 00:31:44.058825 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:44.058831 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.058836 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.058842 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.058847 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.058853 | orchestrator | 
changed: [testbed-node-3] 2026-03-28 00:31:44.058859 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.058865 | orchestrator | 2026-03-28 00:31:44.058871 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-28 00:31:44.058877 | orchestrator | Saturday 28 March 2026 00:30:58 +0000 (0:00:08.177) 0:05:46.261 ******** 2026-03-28 00:31:44.058882 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:44.058901 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.058906 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.058912 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.058917 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.058922 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:44.058927 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.058933 | orchestrator | 2026-03-28 00:31:44.058939 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-28 00:31:44.058945 | orchestrator | Saturday 28 March 2026 00:31:02 +0000 (0:00:03.573) 0:05:49.835 ******** 2026-03-28 00:31:44.058951 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:44.058956 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.058961 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.058967 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.058972 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:44.058977 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.058983 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.058988 | orchestrator | 2026-03-28 00:31:44.058994 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-28 00:31:44.058999 | orchestrator | Saturday 28 March 2026 00:31:03 +0000 (0:00:01.391) 0:05:51.226 ******** 2026-03-28 00:31:44.059004 | orchestrator | ok: [testbed-manager] 
2026-03-28 00:31:44.059010 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.059015 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.059020 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.059026 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.059052 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:44.059058 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.059065 | orchestrator | 2026-03-28 00:31:44.059071 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-28 00:31:44.059077 | orchestrator | Saturday 28 March 2026 00:31:05 +0000 (0:00:01.370) 0:05:52.597 ******** 2026-03-28 00:31:44.059083 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:31:44.059089 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:31:44.059094 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:31:44.059100 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:31:44.059106 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:31:44.059111 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:31:44.059117 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:44.059123 | orchestrator | 2026-03-28 00:31:44.059128 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-28 00:31:44.059140 | orchestrator | Saturday 28 March 2026 00:31:05 +0000 (0:00:00.600) 0:05:53.198 ******** 2026-03-28 00:31:44.059146 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:44.059152 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.059158 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.059164 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.059169 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.059175 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.059180 | orchestrator | changed: [testbed-node-3] 2026-03-28 
00:31:44.059186 | orchestrator | 2026-03-28 00:31:44.059192 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-28 00:31:44.059212 | orchestrator | Saturday 28 March 2026 00:31:15 +0000 (0:00:09.461) 0:06:02.660 ******** 2026-03-28 00:31:44.059218 | orchestrator | changed: [testbed-manager] 2026-03-28 00:31:44.059224 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.059229 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.059235 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.059241 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:44.059247 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.059253 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.059258 | orchestrator | 2026-03-28 00:31:44.059264 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-28 00:31:44.059270 | orchestrator | Saturday 28 March 2026 00:31:16 +0000 (0:00:01.182) 0:06:03.842 ******** 2026-03-28 00:31:44.059276 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:44.059282 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.059288 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.059294 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.059300 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.059305 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.059311 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:44.059316 | orchestrator | 2026-03-28 00:31:44.059323 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-28 00:31:44.059329 | orchestrator | Saturday 28 March 2026 00:31:26 +0000 (0:00:09.574) 0:06:13.417 ******** 2026-03-28 00:31:44.059336 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:44.059342 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.059347 | 
orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.059352 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.059358 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.059363 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.059369 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:44.059374 | orchestrator | 2026-03-28 00:31:44.059380 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-28 00:31:44.059385 | orchestrator | Saturday 28 March 2026 00:31:37 +0000 (0:00:11.343) 0:06:24.760 ******** 2026-03-28 00:31:44.059391 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-28 00:31:44.059397 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-28 00:31:44.059402 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-28 00:31:44.059407 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-28 00:31:44.059413 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-28 00:31:44.059419 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-28 00:31:44.059424 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-28 00:31:44.059429 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-28 00:31:44.059435 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-28 00:31:44.059440 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-28 00:31:44.059446 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-28 00:31:44.059451 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-28 00:31:44.059456 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-28 00:31:44.059462 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-28 00:31:44.059471 | orchestrator | 2026-03-28 00:31:44.059477 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-28 00:31:44.059482 | orchestrator | Saturday 28 March 2026 00:31:38 +0000 (0:00:01.264) 0:06:26.025 ******** 2026-03-28 00:31:44.059488 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:31:44.059494 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:31:44.059501 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:31:44.059506 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:31:44.059512 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:31:44.059517 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:31:44.059523 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:31:44.059528 | orchestrator | 2026-03-28 00:31:44.059533 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-28 00:31:44.059572 | orchestrator | Saturday 28 March 2026 00:31:39 +0000 (0:00:00.706) 0:06:26.731 ******** 2026-03-28 00:31:44.059578 | orchestrator | ok: [testbed-manager] 2026-03-28 00:31:44.059584 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:31:44.059589 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:31:44.059595 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:31:44.059600 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:31:44.059606 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:31:44.059611 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:31:44.059617 | orchestrator | 2026-03-28 00:31:44.059622 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-28 00:31:44.059629 | orchestrator | Saturday 28 March 2026 00:31:43 +0000 (0:00:03.836) 0:06:30.568 ******** 2026-03-28 00:31:44.059635 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:31:44.059640 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:31:44.059646 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:31:44.059651 | orchestrator | skipping: 
[testbed-node-2] 2026-03-28 00:31:44.059657 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:31:44.059662 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:31:44.059668 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:31:44.059673 | orchestrator | 2026-03-28 00:31:44.059680 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-28 00:31:44.059685 | orchestrator | Saturday 28 March 2026 00:31:43 +0000 (0:00:00.546) 0:06:31.114 ******** 2026-03-28 00:31:44.059691 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-28 00:31:44.059697 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-28 00:31:44.059702 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:31:44.059709 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-28 00:31:44.059715 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-28 00:31:44.059721 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:31:44.059727 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-28 00:31:44.059732 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-28 00:31:44.059737 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:31:44.059748 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-28 00:32:04.065706 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-28 00:32:04.065800 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:32:04.065810 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-28 00:32:04.065818 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-28 00:32:04.065825 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:32:04.065832 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-28 00:32:04.065839 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-28 00:32:04.065846 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:32:04.065853 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-28 00:32:04.065883 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-28 00:32:04.065890 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:32:04.065897 | orchestrator | 2026-03-28 00:32:04.065905 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-28 00:32:04.065913 | orchestrator | Saturday 28 March 2026 00:31:44 +0000 (0:00:00.624) 0:06:31.739 ******** 2026-03-28 00:32:04.065920 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:32:04.065927 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:32:04.065934 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:32:04.065940 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:32:04.065947 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:32:04.065954 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:32:04.065960 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:32:04.065967 | orchestrator | 2026-03-28 00:32:04.065973 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-28 00:32:04.065980 | orchestrator | Saturday 28 March 2026 00:31:44 +0000 (0:00:00.549) 0:06:32.288 ******** 2026-03-28 00:32:04.065987 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:32:04.065994 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:32:04.066000 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:32:04.066007 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:32:04.066100 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:32:04.066111 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:32:04.066118 | orchestrator | skipping: [testbed-node-5] 
2026-03-28 00:32:04.066124 | orchestrator |
2026-03-28 00:32:04.066131 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-28 00:32:04.066138 | orchestrator | Saturday 28 March 2026 00:31:45 +0000 (0:00:00.735) 0:06:33.023 ********
2026-03-28 00:32:04.066145 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:04.066151 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:04.066158 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:04.066164 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:04.066171 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:04.066177 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:04.066184 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:04.066190 | orchestrator |
2026-03-28 00:32:04.066197 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-28 00:32:04.066216 | orchestrator | Saturday 28 March 2026 00:31:46 +0000 (0:00:00.611) 0:06:33.635 ********
2026-03-28 00:32:04.066223 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:04.066230 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:04.066237 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:04.066243 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:04.066250 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:04.066256 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:04.066264 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:04.066272 | orchestrator |
2026-03-28 00:32:04.066280 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-28 00:32:04.066287 | orchestrator | Saturday 28 March 2026 00:31:48 +0000 (0:00:01.976) 0:06:35.612 ********
2026-03-28 00:32:04.066297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:32:04.066307 | orchestrator |
2026-03-28 00:32:04.066315 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-28 00:32:04.066323 | orchestrator | Saturday 28 March 2026 00:31:49 +0000 (0:00:00.904) 0:06:36.516 ********
2026-03-28 00:32:04.066331 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:04.066339 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:04.066348 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:04.066355 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:04.066364 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:04.066379 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:04.066387 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:04.066393 | orchestrator |
2026-03-28 00:32:04.066400 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-28 00:32:04.066407 | orchestrator | Saturday 28 March 2026 00:31:50 +0000 (0:00:01.075) 0:06:37.592 ********
2026-03-28 00:32:04.066413 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:04.066420 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:04.066426 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:04.066433 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:04.066439 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:04.066446 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:04.066452 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:04.066459 | orchestrator |
2026-03-28 00:32:04.066465 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-28 00:32:04.066472 | orchestrator | Saturday 28 March 2026 00:31:51 +0000 (0:00:00.856) 0:06:38.448 ********
2026-03-28 00:32:04.066478 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:04.066485 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:04.066491 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:04.066498 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:04.066504 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:04.066511 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:04.066517 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:04.066524 | orchestrator |
2026-03-28 00:32:04.066530 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-28 00:32:04.066550 | orchestrator | Saturday 28 March 2026 00:31:52 +0000 (0:00:01.341) 0:06:39.790 ********
2026-03-28 00:32:04.066557 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:04.066563 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:04.066570 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:04.066576 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:04.066583 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:04.066589 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:04.066596 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:04.066602 | orchestrator |
2026-03-28 00:32:04.066609 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-28 00:32:04.066616 | orchestrator | Saturday 28 March 2026 00:31:53 +0000 (0:00:01.403) 0:06:41.194 ********
2026-03-28 00:32:04.066622 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:04.066629 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:04.066636 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:04.066642 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:04.066649 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:04.066655 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:04.066662 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:04.066668 | orchestrator |
2026-03-28 00:32:04.066675 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-28 00:32:04.066682 | orchestrator | Saturday 28 March 2026 00:31:55 +0000 (0:00:01.288) 0:06:42.482 ********
2026-03-28 00:32:04.066688 | orchestrator | changed: [testbed-manager]
2026-03-28 00:32:04.066695 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:04.066701 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:04.066708 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:04.066714 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:04.066721 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:04.066728 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:04.066734 | orchestrator |
2026-03-28 00:32:04.066741 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-28 00:32:04.066747 | orchestrator | Saturday 28 March 2026 00:31:56 +0000 (0:00:01.741) 0:06:44.224 ********
2026-03-28 00:32:04.066754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:32:04.066770 | orchestrator |
2026-03-28 00:32:04.066777 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-28 00:32:04.066783 | orchestrator | Saturday 28 March 2026 00:31:57 +0000 (0:00:00.918) 0:06:45.143 ********
2026-03-28 00:32:04.066790 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:04.066796 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:04.066803 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:04.066811 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:04.066822 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:04.066832 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:04.066842 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:04.066853 | orchestrator |
2026-03-28 00:32:04.066868 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-28 00:32:04.066883 | orchestrator | Saturday 28 March 2026 00:31:59 +0000 (0:00:01.398) 0:06:46.542 ********
2026-03-28 00:32:04.066896 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:04.066906 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:04.066916 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:04.066927 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:04.066936 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:04.066946 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:04.066956 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:04.066965 | orchestrator |
2026-03-28 00:32:04.066977 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-28 00:32:04.066989 | orchestrator | Saturday 28 March 2026 00:32:00 +0000 (0:00:01.389) 0:06:47.931 ********
2026-03-28 00:32:04.066999 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:04.067011 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:04.067047 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:04.067059 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:04.067071 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:04.067083 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:04.067095 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:04.067106 | orchestrator |
2026-03-28 00:32:04.067119 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-28 00:32:04.067130 | orchestrator | Saturday 28 March 2026 00:32:01 +0000 (0:00:01.097) 0:06:49.028 ********
2026-03-28 00:32:04.067143 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:04.067154 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:04.067166 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:04.067178 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:04.067190 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:04.067202 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:04.067214 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:04.067226 | orchestrator |
2026-03-28 00:32:04.067237 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-28 00:32:04.067249 | orchestrator | Saturday 28 March 2026 00:32:02 +0000 (0:00:01.108) 0:06:50.137 ********
2026-03-28 00:32:04.067259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:32:04.067270 | orchestrator |
2026-03-28 00:32:04.067282 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-28 00:32:04.067294 | orchestrator | Saturday 28 March 2026 00:32:03 +0000 (0:00:00.944) 0:06:51.081 ********
2026-03-28 00:32:04.067305 | orchestrator |
2026-03-28 00:32:04.067317 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-28 00:32:04.067328 | orchestrator | Saturday 28 March 2026 00:32:03 +0000 (0:00:00.042) 0:06:51.124 ********
2026-03-28 00:32:04.067339 | orchestrator |
2026-03-28 00:32:04.067349 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-28 00:32:04.067360 | orchestrator | Saturday 28 March 2026 00:32:04 +0000 (0:00:00.223) 0:06:51.348 ********
2026-03-28 00:32:04.067384 | orchestrator |
2026-03-28 00:32:04.067397 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-28 00:32:04.067420 | orchestrator | Saturday 28 March 2026 00:32:04 +0000 (0:00:00.042) 0:06:51.390 ********
2026-03-28 00:32:31.191624 | orchestrator |
2026-03-28 00:32:31.191749 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-28 00:32:31.191771 | orchestrator | Saturday 28 March 2026 00:32:04 +0000 (0:00:00.042) 0:06:51.433 ********
2026-03-28 00:32:31.191787 | orchestrator |
2026-03-28 00:32:31.191804 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-28 00:32:31.191821 | orchestrator | Saturday 28 March 2026 00:32:04 +0000 (0:00:00.050) 0:06:51.484 ********
2026-03-28 00:32:31.191837 | orchestrator |
2026-03-28 00:32:31.191854 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-28 00:32:31.191870 | orchestrator | Saturday 28 March 2026 00:32:04 +0000 (0:00:00.041) 0:06:51.526 ********
2026-03-28 00:32:31.191887 | orchestrator |
2026-03-28 00:32:31.191903 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-28 00:32:31.191919 | orchestrator | Saturday 28 March 2026 00:32:04 +0000 (0:00:00.041) 0:06:51.567 ********
2026-03-28 00:32:31.191936 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:31.191953 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:31.191969 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:31.191985 | orchestrator |
2026-03-28 00:32:31.192001 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-28 00:32:31.192046 | orchestrator | Saturday 28 March 2026 00:32:05 +0000 (0:00:01.170) 0:06:52.738 ********
2026-03-28 00:32:31.192063 | orchestrator | changed: [testbed-manager]
2026-03-28 00:32:31.192081 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:31.192097 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:31.192114 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:31.192129 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:31.192146 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:31.192163 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:31.192180 | orchestrator |
2026-03-28 00:32:31.192197 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-28 00:32:31.192214 | orchestrator | Saturday 28 March 2026 00:32:06 +0000 (0:00:01.278) 0:06:54.016 ********
2026-03-28 00:32:31.192231 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:31.192248 | orchestrator | changed: [testbed-manager]
2026-03-28 00:32:31.192264 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:31.192279 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:31.192293 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:31.192308 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:31.192323 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:31.192338 | orchestrator |
2026-03-28 00:32:31.192355 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-28 00:32:31.192371 | orchestrator | Saturday 28 March 2026 00:32:07 +0000 (0:00:01.165) 0:06:55.182 ********
2026-03-28 00:32:31.192387 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:31.192404 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:31.192419 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:31.192434 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:31.192449 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:31.192465 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:31.192500 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:31.192515 | orchestrator |
2026-03-28 00:32:31.192531 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-28 00:32:31.192548 | orchestrator | Saturday 28 March 2026 00:32:10 +0000 (0:00:02.435) 0:06:57.617 ********
2026-03-28 00:32:31.192563 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:31.192579 | orchestrator |
2026-03-28 00:32:31.192593 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-28 00:32:31.192609 | orchestrator | Saturday 28 March 2026 00:32:10 +0000 (0:00:00.095) 0:06:57.713 ********
2026-03-28 00:32:31.192658 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:31.192676 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:31.192692 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:31.192707 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:31.192723 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:32:31.192739 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:31.192754 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:31.192770 | orchestrator |
2026-03-28 00:32:31.192786 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-28 00:32:31.192804 | orchestrator | Saturday 28 March 2026 00:32:11 +0000 (0:00:01.247) 0:06:58.960 ********
2026-03-28 00:32:31.192819 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:31.192834 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:31.192850 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:31.192865 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:31.192882 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:31.192898 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:31.192914 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:31.192930 | orchestrator |
2026-03-28 00:32:31.192947 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-28 00:32:31.192963 | orchestrator | Saturday 28 March 2026 00:32:12 +0000 (0:00:00.595) 0:06:59.556 ********
2026-03-28 00:32:31.192982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:32:31.193001 | orchestrator |
2026-03-28 00:32:31.193049 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-28 00:32:31.193066 | orchestrator | Saturday 28 March 2026 00:32:13 +0000 (0:00:00.927) 0:07:00.483 ********
2026-03-28 00:32:31.193083 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:31.193100 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:31.193117 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:31.193133 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:31.193150 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:31.193167 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:31.193182 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:31.193200 | orchestrator |
2026-03-28 00:32:31.193217 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-28 00:32:31.193234 | orchestrator | Saturday 28 March 2026 00:32:14 +0000 (0:00:01.017) 0:07:01.501 ********
2026-03-28 00:32:31.193251 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-28 00:32:31.193294 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-28 00:32:31.193312 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-28 00:32:31.193329 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-28 00:32:31.193344 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-28 00:32:31.193360 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-28 00:32:31.193376 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-28 00:32:31.193392 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-28 00:32:31.193409 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-28 00:32:31.193426 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-28 00:32:31.193442 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-28 00:32:31.193459 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-28 00:32:31.193474 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-28 00:32:31.193489 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-28 00:32:31.193505 | orchestrator |
2026-03-28 00:32:31.193522 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-28 00:32:31.193556 | orchestrator | Saturday 28 March 2026 00:32:16 +0000 (0:00:02.656) 0:07:04.157 ********
2026-03-28 00:32:31.193574 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:31.193592 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:31.193610 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:31.193628 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:31.193645 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:31.193663 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:31.193680 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:31.193698 | orchestrator |
2026-03-28 00:32:31.193715 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-28 00:32:31.193733 | orchestrator | Saturday 28 March 2026 00:32:17 +0000 (0:00:00.507) 0:07:04.664 ********
2026-03-28 00:32:31.193753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:32:31.193774 | orchestrator |
2026-03-28 00:32:31.193790 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-28 00:32:31.193806 | orchestrator | Saturday 28 March 2026 00:32:18 +0000 (0:00:01.018) 0:07:05.682 ********
2026-03-28 00:32:31.193821 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:31.193837 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:31.193852 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:31.193869 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:31.193887 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:31.193917 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:31.193935 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:31.193953 | orchestrator |
2026-03-28 00:32:31.193971 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-28 00:32:31.193988 | orchestrator | Saturday 28 March 2026 00:32:19 +0000 (0:00:00.871) 0:07:06.554 ********
2026-03-28 00:32:31.194006 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:31.194263 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:31.194277 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:31.194291 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:31.194304 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:31.194318 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:31.194331 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:31.194343 | orchestrator |
2026-03-28 00:32:31.194357 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-28 00:32:31.194369 | orchestrator | Saturday 28 March 2026 00:32:20 +0000 (0:00:00.847) 0:07:07.402 ********
2026-03-28 00:32:31.194381 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:31.194394 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:31.194406 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:31.194419 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:31.194432 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:31.194445 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:31.194457 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:31.194468 | orchestrator |
2026-03-28 00:32:31.194481 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-28 00:32:31.194493 | orchestrator | Saturday 28 March 2026 00:32:20 +0000 (0:00:00.520) 0:07:07.922 ********
2026-03-28 00:32:31.194506 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:31.194518 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:32:31.194531 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:32:31.194544 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:32:31.194557 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:32:31.194570 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:32:31.194582 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:32:31.194593 | orchestrator |
2026-03-28 00:32:31.194606 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-28 00:32:31.194618 | orchestrator | Saturday 28 March 2026 00:32:22 +0000 (0:00:01.538) 0:07:09.461 ********
2026-03-28 00:32:31.194647 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:32:31.194659 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:32:31.194671 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:32:31.194684 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:32:31.194698 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:32:31.194711 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:32:31.194725 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:32:31.194739 | orchestrator |
2026-03-28 00:32:31.194751 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-28 00:32:31.194764 | orchestrator | Saturday 28 March 2026 00:32:22 +0000 (0:00:00.737) 0:07:10.198 ********
2026-03-28 00:32:31.194777 | orchestrator | ok: [testbed-manager]
2026-03-28 00:32:31.194789 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:32:31.194802 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:32:31.194815 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:32:31.194829 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:32:31.194843 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:32:31.194875 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:04.139209 | orchestrator |
2026-03-28 00:33:04.139348 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-28 00:33:04.139374 | orchestrator | Saturday 28 March 2026 00:32:31 +0000 (0:00:08.392) 0:07:18.590 ********
2026-03-28 00:33:04.139392 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.139411 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:04.139429 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:04.139445 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:04.139461 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:04.139479 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:04.139496 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:04.139513 | orchestrator |
2026-03-28 00:33:04.139530 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-28 00:33:04.139546 | orchestrator | Saturday 28 March 2026 00:32:32 +0000 (0:00:01.403) 0:07:19.994 ********
2026-03-28 00:33:04.139562 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.139579 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:04.139596 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:04.139614 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:04.139630 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:04.139646 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:04.139663 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:04.139679 | orchestrator |
2026-03-28 00:33:04.139696 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-28 00:33:04.139713 | orchestrator | Saturday 28 March 2026 00:32:34 +0000 (0:00:01.711) 0:07:21.706 ********
2026-03-28 00:33:04.139730 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.139746 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:33:04.139762 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:33:04.139778 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:33:04.139795 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:33:04.139812 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:33:04.139829 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:33:04.139845 | orchestrator |
2026-03-28 00:33:04.139861 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-28 00:33:04.139878 | orchestrator | Saturday 28 March 2026 00:32:36 +0000 (0:00:01.875) 0:07:23.582 ********
2026-03-28 00:33:04.139895 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.139912 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:04.139929 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:04.139945 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:04.139961 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:04.139978 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:04.140018 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:33:04.140036 | orchestrator |
2026-03-28 00:33:04.140052 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-28 00:33:04.140100 | orchestrator | Saturday 28 March 2026 00:32:37 +0000 (0:00:00.907) 0:07:24.489 ********
2026-03-28 00:33:04.140118 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:33:04.140135 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:33:04.140152 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:33:04.140167 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:04.140184 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:04.140200 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:04.140217 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:04.140234 | orchestrator |
2026-03-28 00:33:04.140251 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-28 00:33:04.140267 | orchestrator | Saturday 28 March 2026 00:32:38 +0000 (0:00:00.909) 0:07:25.399 ********
2026-03-28 00:33:04.140284 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:33:04.140300 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:33:04.140317 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:33:04.140334 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:04.140350 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:04.140366 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:04.140382 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:04.140399 | orchestrator |
2026-03-28 00:33:04.140416 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-28 00:33:04.140433 | orchestrator | Saturday 28 March 2026 00:32:38 +0000 (0:00:00.730) 0:07:26.129 ********
2026-03-28 00:33:04.140450 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.140466 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:04.140482 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:04.140498 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:04.140515 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:04.140532 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:04.140549 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:33:04.140565 | orchestrator |
2026-03-28 00:33:04.140581 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-28 00:33:04.140598 | orchestrator | Saturday 28 March 2026 00:32:39 +0000 (0:00:00.518) 0:07:26.647 ********
2026-03-28 00:33:04.140614 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.140632 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:04.140648 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:04.140684 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:04.140701 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:04.140718 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:04.140734 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:33:04.140751 | orchestrator |
2026-03-28 00:33:04.140768 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-28 00:33:04.140784 | orchestrator | Saturday 28 March 2026 00:32:39 +0000 (0:00:00.556) 0:07:27.204 ********
2026-03-28 00:33:04.140799 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.140815 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:04.140832 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:04.140848 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:04.140865 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:04.140880 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:04.140898 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:33:04.140914 | orchestrator |
2026-03-28 00:33:04.140930 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-28 00:33:04.140945 | orchestrator | Saturday 28 March 2026 00:32:40 +0000 (0:00:00.515) 0:07:27.720 ********
2026-03-28 00:33:04.140962 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.140977 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:04.141061 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:04.141079 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:04.141095 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:04.141111 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:33:04.141128 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:04.141145 | orchestrator |
2026-03-28 00:33:04.141203 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-28 00:33:04.141223 | orchestrator | Saturday 28 March 2026 00:32:45 +0000 (0:00:05.603) 0:07:33.323 ********
2026-03-28 00:33:04.141235 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:33:04.141245 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:33:04.141255 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:33:04.141264 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:33:04.141274 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:33:04.141284 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:33:04.141294 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:33:04.141303 | orchestrator |
2026-03-28 00:33:04.141313 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-28 00:33:04.141323 | orchestrator | Saturday 28 March 2026 00:32:46 +0000 (0:00:00.814) 0:07:34.137 ********
2026-03-28 00:33:04.141334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:33:04.141347 | orchestrator |
2026-03-28 00:33:04.141357 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-28 00:33:04.141366 | orchestrator | Saturday 28 March 2026 00:32:47 +0000 (0:00:00.827) 0:07:34.965 ********
2026-03-28 00:33:04.141376 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.141386 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:04.141395 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:04.141405 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:04.141414 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:04.141424 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:04.141433 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:33:04.141442 | orchestrator |
2026-03-28 00:33:04.141452 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-28 00:33:04.141462 | orchestrator | Saturday 28 March 2026 00:32:49 +0000 (0:00:01.931) 0:07:36.897 ********
2026-03-28 00:33:04.141471 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.141481 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:04.141490 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:04.141500 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:04.141509 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:04.141518 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:04.141529 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:33:04.141547 | orchestrator |
2026-03-28 00:33:04.141563 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-28 00:33:04.141579 | orchestrator | Saturday 28 March 2026 00:32:50 +0000 (0:00:01.335) 0:07:38.233 ********
2026-03-28 00:33:04.141595 | orchestrator | ok: [testbed-manager]
2026-03-28 00:33:04.141612 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:33:04.141630 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:33:04.141646 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:33:04.141661 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:33:04.141677 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:33:04.141692 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:33:04.141706 | orchestrator |
2026-03-28 00:33:04.141730 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-28 00:33:04.141746 | orchestrator | Saturday 28 March 2026 00:32:51 +0000 (0:00:00.886) 0:07:39.120 ********
2026-03-28 00:33:04.141762 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:33:04.141780 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:33:04.141796 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:33:04.141813 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:33:04.141843 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:33:04.141857 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:33:04.141867 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-28 00:33:04.141877 | orchestrator |
2026-03-28 00:33:04.141886 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-28 00:33:04.141896 | orchestrator | Saturday 28 March 2026 00:32:53 +0000 (0:00:01.732) 0:07:40.852 ********
2026-03-28 00:33:04.141906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:33:04.141917 | orchestrator |
2026-03-28 00:33:04.141934 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-28 00:33:04.141950 |
orchestrator | Saturday 28 March 2026 00:32:54 +0000 (0:00:01.210) 0:07:42.062 ******** 2026-03-28 00:33:04.141966 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:04.141980 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:04.142118 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:04.142135 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:04.142152 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:04.142170 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:04.142186 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:04.142202 | orchestrator | 2026-03-28 00:33:04.142235 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-28 00:33:35.005395 | orchestrator | Saturday 28 March 2026 00:33:04 +0000 (0:00:09.406) 0:07:51.469 ******** 2026-03-28 00:33:35.005507 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:35.005523 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:35.005534 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:35.005545 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:35.005556 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:35.005567 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:35.005578 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:35.005589 | orchestrator | 2026-03-28 00:33:35.005600 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-28 00:33:35.005612 | orchestrator | Saturday 28 March 2026 00:33:05 +0000 (0:00:01.760) 0:07:53.230 ******** 2026-03-28 00:33:35.005623 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:35.005634 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:35.005645 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:35.005656 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:35.005666 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:35.005677 | orchestrator | ok: [testbed-node-5] 
2026-03-28 00:33:35.005688 | orchestrator | 2026-03-28 00:33:35.005699 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-28 00:33:35.005710 | orchestrator | Saturday 28 March 2026 00:33:07 +0000 (0:00:01.510) 0:07:54.740 ******** 2026-03-28 00:33:35.005721 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:35.005733 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:35.005744 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:35.005755 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:35.005765 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:35.005776 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:35.005787 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:35.005797 | orchestrator | 2026-03-28 00:33:35.005808 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-28 00:33:35.005819 | orchestrator | 2026-03-28 00:33:35.005830 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-28 00:33:35.005867 | orchestrator | Saturday 28 March 2026 00:33:08 +0000 (0:00:01.319) 0:07:56.060 ******** 2026-03-28 00:33:35.005879 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:33:35.005889 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:33:35.005906 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:33:35.005927 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:33:35.005946 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:33:35.006092 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:33:35.006119 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:33:35.006137 | orchestrator | 2026-03-28 00:33:35.006156 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-28 00:33:35.006176 | orchestrator | 2026-03-28 00:33:35.006197 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-03-28 00:33:35.006217 | orchestrator | Saturday 28 March 2026 00:33:09 +0000 (0:00:00.569) 0:07:56.630 ******** 2026-03-28 00:33:35.006238 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:35.006252 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:35.006263 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:35.006274 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:35.006300 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:35.006312 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:35.006322 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:35.006333 | orchestrator | 2026-03-28 00:33:35.006344 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-28 00:33:35.006357 | orchestrator | Saturday 28 March 2026 00:33:10 +0000 (0:00:01.396) 0:07:58.026 ******** 2026-03-28 00:33:35.006376 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:35.006403 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:35.006422 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:35.006438 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:35.006455 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:35.006488 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:35.006505 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:35.006523 | orchestrator | 2026-03-28 00:33:35.006541 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-28 00:33:35.006561 | orchestrator | Saturday 28 March 2026 00:33:12 +0000 (0:00:01.746) 0:07:59.773 ******** 2026-03-28 00:33:35.006579 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:33:35.006598 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:33:35.006609 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:33:35.006620 | orchestrator | skipping: [testbed-node-2] 
2026-03-28 00:33:35.006630 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:33:35.006641 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:33:35.006652 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:33:35.006662 | orchestrator | 2026-03-28 00:33:35.006673 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-28 00:33:35.006684 | orchestrator | Saturday 28 March 2026 00:33:12 +0000 (0:00:00.516) 0:08:00.290 ******** 2026-03-28 00:33:35.006702 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:33:35.006722 | orchestrator | 2026-03-28 00:33:35.006740 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-28 00:33:35.006757 | orchestrator | Saturday 28 March 2026 00:33:13 +0000 (0:00:00.842) 0:08:01.133 ******** 2026-03-28 00:33:35.006776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:33:35.006796 | orchestrator | 2026-03-28 00:33:35.006813 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-28 00:33:35.006831 | orchestrator | Saturday 28 March 2026 00:33:14 +0000 (0:00:01.020) 0:08:02.153 ******** 2026-03-28 00:33:35.006866 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:35.006885 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:35.006903 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:35.006920 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:35.006932 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:35.006943 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:35.006954 | 
orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:35.006992 | orchestrator | 2026-03-28 00:33:35.007028 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-28 00:33:35.007039 | orchestrator | Saturday 28 March 2026 00:33:23 +0000 (0:00:08.617) 0:08:10.770 ******** 2026-03-28 00:33:35.007050 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:35.007061 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:35.007071 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:35.007082 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:35.007093 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:35.007104 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:35.007115 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:35.007126 | orchestrator | 2026-03-28 00:33:35.007137 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-28 00:33:35.007148 | orchestrator | Saturday 28 March 2026 00:33:24 +0000 (0:00:00.932) 0:08:11.703 ******** 2026-03-28 00:33:35.007158 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:35.007170 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:35.007189 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:35.007217 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:35.007237 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:35.007254 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:35.007270 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:35.007286 | orchestrator | 2026-03-28 00:33:35.007303 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-28 00:33:35.007321 | orchestrator | Saturday 28 March 2026 00:33:25 +0000 (0:00:01.442) 0:08:13.145 ******** 2026-03-28 00:33:35.007339 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:35.007355 | orchestrator | 
changed: [testbed-node-0] 2026-03-28 00:33:35.007372 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:35.007388 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:35.007405 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:35.007421 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:35.007436 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:35.007452 | orchestrator | 2026-03-28 00:33:35.007469 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-03-28 00:33:35.007486 | orchestrator | Saturday 28 March 2026 00:33:27 +0000 (0:00:01.954) 0:08:15.099 ******** 2026-03-28 00:33:35.007502 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:35.007520 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:35.007537 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:35.007556 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:35.007572 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:35.007590 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:35.007608 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:35.007624 | orchestrator | 2026-03-28 00:33:35.007641 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-28 00:33:35.007658 | orchestrator | Saturday 28 March 2026 00:33:29 +0000 (0:00:01.277) 0:08:16.377 ******** 2026-03-28 00:33:35.007677 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:35.007696 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:35.007715 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:35.007734 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:35.007764 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:35.007783 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:35.007803 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:35.007822 | orchestrator | 2026-03-28 
00:33:35.007857 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-28 00:33:35.007870 | orchestrator | 2026-03-28 00:33:35.007881 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-28 00:33:35.007892 | orchestrator | Saturday 28 March 2026 00:33:30 +0000 (0:00:01.106) 0:08:17.483 ******** 2026-03-28 00:33:35.007903 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:33:35.007914 | orchestrator | 2026-03-28 00:33:35.007925 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-28 00:33:35.007941 | orchestrator | Saturday 28 March 2026 00:33:31 +0000 (0:00:01.020) 0:08:18.503 ******** 2026-03-28 00:33:35.007987 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:35.008008 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:35.008026 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:35.008043 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:35.008062 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:35.008081 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:35.008101 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:35.008119 | orchestrator | 2026-03-28 00:33:35.008139 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-28 00:33:35.008151 | orchestrator | Saturday 28 March 2026 00:33:32 +0000 (0:00:00.853) 0:08:19.357 ******** 2026-03-28 00:33:35.008162 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:35.008173 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:35.008183 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:35.008194 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:35.008205 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:35.008215 | 
orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:35.008226 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:35.008236 | orchestrator | 2026-03-28 00:33:35.008247 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-28 00:33:35.008258 | orchestrator | Saturday 28 March 2026 00:33:33 +0000 (0:00:01.287) 0:08:20.644 ******** 2026-03-28 00:33:35.008310 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:33:35.008322 | orchestrator | 2026-03-28 00:33:35.008333 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-28 00:33:35.008343 | orchestrator | Saturday 28 March 2026 00:33:34 +0000 (0:00:00.827) 0:08:21.472 ******** 2026-03-28 00:33:35.008354 | orchestrator | ok: [testbed-manager] 2026-03-28 00:33:35.008365 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:33:35.008376 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:33:35.008386 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:33:35.008397 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:33:35.008408 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:33:35.008419 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:33:35.008429 | orchestrator | 2026-03-28 00:33:35.008456 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-28 00:33:36.610924 | orchestrator | Saturday 28 March 2026 00:33:35 +0000 (0:00:00.868) 0:08:22.340 ******** 2026-03-28 00:33:36.611075 | orchestrator | changed: [testbed-manager] 2026-03-28 00:33:36.611098 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:33:36.611110 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:33:36.611121 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:33:36.611132 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:33:36.611143 | 
orchestrator | changed: [testbed-node-4] 2026-03-28 00:33:36.611154 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:33:36.611164 | orchestrator | 2026-03-28 00:33:36.611177 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:33:36.611189 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-28 00:33:36.611230 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-28 00:33:36.611241 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-28 00:33:36.611252 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-28 00:33:36.611263 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-28 00:33:36.611274 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-28 00:33:36.611285 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-28 00:33:36.611295 | orchestrator | 2026-03-28 00:33:36.611306 | orchestrator | 2026-03-28 00:33:36.611317 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:33:36.611329 | orchestrator | Saturday 28 March 2026 00:33:36 +0000 (0:00:01.270) 0:08:23.611 ******** 2026-03-28 00:33:36.611339 | orchestrator | =============================================================================== 2026-03-28 00:33:36.611350 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.77s 2026-03-28 00:33:36.611361 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.84s 2026-03-28 00:33:36.611386 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 35.74s 2026-03-28 00:33:36.611397 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.12s 2026-03-28 00:33:36.611408 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.77s 2026-03-28 00:33:36.611419 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.45s 2026-03-28 00:33:36.611430 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.34s 2026-03-28 00:33:36.611441 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.57s 2026-03-28 00:33:36.611451 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.46s 2026-03-28 00:33:36.611462 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.41s 2026-03-28 00:33:36.611473 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.87s 2026-03-28 00:33:36.611484 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.62s 2026-03-28 00:33:36.611495 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.40s 2026-03-28 00:33:36.611505 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.39s 2026-03-28 00:33:36.611516 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.20s 2026-03-28 00:33:36.611527 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.18s 2026-03-28 00:33:36.611538 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.34s 2026-03-28 00:33:36.611548 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.13s 2026-03-28 00:33:36.611559 | orchestrator | 
osism.commons.services : Populate service facts ------------------------- 6.04s 2026-03-28 00:33:36.611570 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.69s 2026-03-28 00:33:36.805225 | orchestrator | + osism apply fail2ban 2026-03-28 00:33:48.682066 | orchestrator | 2026-03-28 00:33:48 | INFO  | Prepare task for execution of fail2ban. 2026-03-28 00:33:48.760410 | orchestrator | 2026-03-28 00:33:48 | INFO  | Task a1dfe36c-6cf1-4510-8bfa-e89e63d971b1 (fail2ban) was prepared for execution. 2026-03-28 00:33:48.760516 | orchestrator | 2026-03-28 00:33:48 | INFO  | It takes a moment until task a1dfe36c-6cf1-4510-8bfa-e89e63d971b1 (fail2ban) has been started and output is visible here. 2026-03-28 00:34:10.561806 | orchestrator | 2026-03-28 00:34:10.561912 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-28 00:34:10.561992 | orchestrator | 2026-03-28 00:34:10.562006 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-28 00:34:10.562058 | orchestrator | Saturday 28 March 2026 00:33:52 +0000 (0:00:00.366) 0:00:00.366 ******** 2026-03-28 00:34:10.562070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:34:10.562081 | orchestrator | 2026-03-28 00:34:10.562095 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-28 00:34:10.562110 | orchestrator | Saturday 28 March 2026 00:33:53 +0000 (0:00:01.222) 0:00:01.589 ******** 2026-03-28 00:34:10.562123 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:10.562139 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:10.562154 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:10.562170 | 
orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:10.562184 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:10.562198 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:10.562213 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:10.562228 | orchestrator | 2026-03-28 00:34:10.562238 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-28 00:34:10.562248 | orchestrator | Saturday 28 March 2026 00:34:05 +0000 (0:00:11.654) 0:00:13.243 ******** 2026-03-28 00:34:10.562256 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:10.562265 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:10.562274 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:10.562282 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:10.562291 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:10.562300 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:10.562308 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:10.562317 | orchestrator | 2026-03-28 00:34:10.562325 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-03-28 00:34:10.562334 | orchestrator | Saturday 28 March 2026 00:34:07 +0000 (0:00:01.689) 0:00:14.933 ******** 2026-03-28 00:34:10.562343 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:10.562355 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:10.562365 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:10.562375 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:10.562385 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:10.562394 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:10.562404 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:10.562415 | orchestrator | 2026-03-28 00:34:10.562424 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-03-28 00:34:10.562435 | orchestrator | Saturday 28 
March 2026 00:34:08 +0000 (0:00:01.299) 0:00:16.233 ******** 2026-03-28 00:34:10.562445 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:10.562455 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:10.562465 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:10.562475 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:10.562486 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:10.562496 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:10.562506 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:10.562520 | orchestrator | 2026-03-28 00:34:10.562535 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:34:10.562568 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:34:10.562580 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:34:10.562615 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:34:10.562626 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:34:10.562637 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:34:10.562647 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:34:10.562657 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:34:10.562666 | orchestrator | 2026-03-28 00:34:10.562676 | orchestrator | 2026-03-28 00:34:10.562688 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:34:10.562704 | orchestrator | Saturday 28 March 2026 00:34:10 +0000 (0:00:01.764) 0:00:17.998 ******** 2026-03-28 00:34:10.562720 | 
orchestrator | =============================================================================== 2026-03-28 00:34:10.562736 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.65s 2026-03-28 00:34:10.562750 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.76s 2026-03-28 00:34:10.562764 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.69s 2026-03-28 00:34:10.562780 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.30s 2026-03-28 00:34:10.562795 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.22s 2026-03-28 00:34:10.798674 | orchestrator | + osism apply network 2026-03-28 00:34:22.135618 | orchestrator | 2026-03-28 00:34:22 | INFO  | Prepare task for execution of network. 2026-03-28 00:34:22.217776 | orchestrator | 2026-03-28 00:34:22 | INFO  | Task 447c8729-a0e1-4916-a17a-cce5303b6d8c (network) was prepared for execution. 2026-03-28 00:34:22.217884 | orchestrator | 2026-03-28 00:34:22 | INFO  | It takes a moment until task 447c8729-a0e1-4916-a17a-cce5303b6d8c (network) has been started and output is visible here. 
2026-03-28 00:34:52.076500 | orchestrator | 2026-03-28 00:34:52.076681 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-28 00:34:52.076702 | orchestrator | 2026-03-28 00:34:52.076715 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-28 00:34:52.076727 | orchestrator | Saturday 28 March 2026 00:34:25 +0000 (0:00:00.342) 0:00:00.342 ******** 2026-03-28 00:34:52.076738 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:52.076751 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:52.076762 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:52.076773 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:52.076784 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:52.076795 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:52.076805 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:52.076816 | orchestrator | 2026-03-28 00:34:52.076827 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-28 00:34:52.076838 | orchestrator | Saturday 28 March 2026 00:34:26 +0000 (0:00:00.618) 0:00:00.961 ******** 2026-03-28 00:34:52.076851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:34:52.076865 | orchestrator | 2026-03-28 00:34:52.076876 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-28 00:34:52.076908 | orchestrator | Saturday 28 March 2026 00:34:27 +0000 (0:00:01.238) 0:00:02.199 ******** 2026-03-28 00:34:52.076920 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:52.076960 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:52.076972 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:52.076983 | 
orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:52.076993 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:52.077004 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:52.077015 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:52.077025 | orchestrator | 2026-03-28 00:34:52.077036 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-28 00:34:52.077047 | orchestrator | Saturday 28 March 2026 00:34:30 +0000 (0:00:02.498) 0:00:04.697 ******** 2026-03-28 00:34:52.077058 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:52.077069 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:52.077079 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:52.077090 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:52.077101 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:52.077111 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:52.077122 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:52.077132 | orchestrator | 2026-03-28 00:34:52.077143 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-28 00:34:52.077154 | orchestrator | Saturday 28 March 2026 00:34:31 +0000 (0:00:01.569) 0:00:06.267 ******** 2026-03-28 00:34:52.077165 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-28 00:34:52.077176 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-28 00:34:52.077188 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-28 00:34:52.077198 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-28 00:34:52.077209 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-28 00:34:52.077220 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-28 00:34:52.077231 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-28 00:34:52.077241 | orchestrator | 2026-03-28 00:34:52.077271 | orchestrator | TASK [osism.commons.network : Write 
network_netplan_config_template to temporary file] *** 2026-03-28 00:34:52.077284 | orchestrator | Saturday 28 March 2026 00:34:32 +0000 (0:00:01.213) 0:00:07.481 ******** 2026-03-28 00:34:52.077294 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:52.077306 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:52.077316 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:52.077327 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:52.077338 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:52.077349 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:52.077359 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:52.077370 | orchestrator | 2026-03-28 00:34:52.077381 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] *** 2026-03-28 00:34:52.077393 | orchestrator | Saturday 28 March 2026 00:34:33 +0000 (0:00:00.695) 0:00:08.177 ******** 2026-03-28 00:34:52.077403 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:52.077414 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:52.077425 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:52.077436 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:52.077446 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:52.077457 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:52.077467 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:52.077478 | orchestrator | 2026-03-28 00:34:52.077488 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] *** 2026-03-28 00:34:52.077499 | orchestrator | Saturday 28 March 2026 00:34:34 +0000 (0:00:00.869) 0:00:09.046 ******** 2026-03-28 00:34:52.077510 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:52.077521 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:52.077531 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 00:34:52.077542 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:52.077552 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:52.077563 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:52.077573 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:52.077592 | orchestrator | 2026-03-28 00:34:52.077603 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-28 00:34:52.077614 | orchestrator | Saturday 28 March 2026 00:34:35 +0000 (0:00:00.827) 0:00:09.874 ******** 2026-03-28 00:34:52.077624 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 00:34:52.077635 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:34:52.077646 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 00:34:52.077656 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 00:34:52.077667 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 00:34:52.077678 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 00:34:52.077688 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 00:34:52.077699 | orchestrator | 2026-03-28 00:34:52.077729 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-28 00:34:52.077741 | orchestrator | Saturday 28 March 2026 00:34:38 +0000 (0:00:03.504) 0:00:13.379 ******** 2026-03-28 00:34:52.077752 | orchestrator | changed: [testbed-manager] 2026-03-28 00:34:52.077762 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:52.077773 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:52.077784 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:52.077794 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:52.077805 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:52.077815 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:52.077826 | orchestrator | 2026-03-28 00:34:52.077837 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-03-28 00:34:52.077848 | orchestrator | Saturday 28 March 2026 00:34:40 +0000 (0:00:01.625) 0:00:15.004 ******** 2026-03-28 00:34:52.077858 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:34:52.077869 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 00:34:52.077880 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 00:34:52.077961 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 00:34:52.077974 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 00:34:52.077984 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 00:34:52.077995 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 00:34:52.078006 | orchestrator | 2026-03-28 00:34:52.078081 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-28 00:34:52.078096 | orchestrator | Saturday 28 March 2026 00:34:42 +0000 (0:00:01.816) 0:00:16.820 ******** 2026-03-28 00:34:52.078107 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:52.078118 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:52.078129 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:52.078139 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:52.078150 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:52.078161 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:52.078171 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:52.078182 | orchestrator | 2026-03-28 00:34:52.078193 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-28 00:34:52.078204 | orchestrator | Saturday 28 March 2026 00:34:43 +0000 (0:00:01.142) 0:00:17.962 ******** 2026-03-28 00:34:52.078215 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:34:52.078225 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:52.078236 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
00:34:52.078247 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:52.078257 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:52.078268 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:52.078279 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:52.078290 | orchestrator | 2026-03-28 00:34:52.078301 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-28 00:34:52.078311 | orchestrator | Saturday 28 March 2026 00:34:43 +0000 (0:00:00.681) 0:00:18.644 ******** 2026-03-28 00:34:52.078322 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:52.078333 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:52.078343 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:52.078363 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:52.078374 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:52.078392 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:52.078403 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:34:52.078413 | orchestrator | 2026-03-28 00:34:52.078424 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-28 00:34:52.078435 | orchestrator | Saturday 28 March 2026 00:34:46 +0000 (0:00:02.249) 0:00:20.893 ******** 2026-03-28 00:34:52.078446 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:34:52.078457 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:34:52.078468 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:34:52.078479 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:34:52.078490 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:34:52.078500 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:34:52.078511 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-03-28 00:34:52.078524 | orchestrator | 2026-03-28 00:34:52.078535 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-28 00:34:52.078545 | orchestrator | Saturday 28 March 2026 00:34:47 +0000 (0:00:00.962) 0:00:21.856 ******** 2026-03-28 00:34:52.078556 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:52.078567 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:34:52.078578 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:34:52.078589 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:34:52.078599 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:34:52.078610 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:34:52.078621 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:34:52.078631 | orchestrator | 2026-03-28 00:34:52.078642 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-28 00:34:52.078653 | orchestrator | Saturday 28 March 2026 00:34:48 +0000 (0:00:01.686) 0:00:23.543 ******** 2026-03-28 00:34:52.078665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:34:52.078677 | orchestrator | 2026-03-28 00:34:52.078688 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-28 00:34:52.078699 | orchestrator | Saturday 28 March 2026 00:34:50 +0000 (0:00:01.322) 0:00:24.865 ******** 2026-03-28 00:34:52.078710 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:52.078721 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:52.078732 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:52.078742 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:52.078753 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:52.078764 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:34:52.078775 | orchestrator | ok: [testbed-node-5] 2026-03-28 
00:34:52.078786 | orchestrator | 2026-03-28 00:34:52.078797 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-28 00:34:52.078807 | orchestrator | Saturday 28 March 2026 00:34:51 +0000 (0:00:01.324) 0:00:26.190 ******** 2026-03-28 00:34:52.078818 | orchestrator | ok: [testbed-manager] 2026-03-28 00:34:52.078829 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:34:52.078840 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:34:52.078850 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:34:52.078861 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:34:52.078882 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:35:09.702968 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:35:09.703072 | orchestrator | 2026-03-28 00:35:09.703089 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-28 00:35:09.703102 | orchestrator | Saturday 28 March 2026 00:34:52 +0000 (0:00:00.708) 0:00:26.898 ******** 2026-03-28 00:35:09.703121 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:35:09.703133 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:35:09.703175 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:35:09.703195 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:35:09.703214 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:35:09.703234 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:35:09.703254 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:35:09.703273 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:35:09.703289 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:35:09.703307 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:35:09.703318 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:35:09.703329 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-28 00:35:09.703340 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:35:09.703350 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-28 00:35:09.703361 | orchestrator | 2026-03-28 00:35:09.703372 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-28 00:35:09.703383 | orchestrator | Saturday 28 March 2026 00:34:53 +0000 (0:00:01.288) 0:00:28.187 ******** 2026-03-28 00:35:09.703394 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:35:09.703405 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:35:09.703416 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:35:09.703426 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:35:09.703437 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:35:09.703448 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:35:09.703458 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:35:09.703469 | orchestrator | 2026-03-28 00:35:09.703480 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-28 00:35:09.703494 | orchestrator | Saturday 28 March 2026 00:34:54 +0000 (0:00:00.697) 0:00:28.884 ******** 2026-03-28 00:35:09.703522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2026-03-28 00:35:09.703538 | orchestrator | 2026-03-28 
00:35:09.703551 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-28 00:35:09.703565 | orchestrator | Saturday 28 March 2026 00:34:58 +0000 (0:00:04.729) 0:00:33.614 ******** 2026-03-28 00:35:09.703578 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-03-28 00:35:09.703592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:09.703613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:09.703626 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-03-28 00:35:09.703637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:09.703657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 
'addresses': []}}) 2026-03-28 00:35:09.703684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-03-28 00:35:09.703697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:09.703709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-03-28 00:35:09.703720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:09.703833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-03-28 00:35:09.703899 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-03-28 00:35:09.703919 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-03-28 00:35:09.703944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-03-28 00:35:09.703962 | orchestrator | 2026-03-28 00:35:09.703980 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-28 00:35:09.703995 | orchestrator | Saturday 28 March 2026 00:35:04 +0000 (0:00:06.038) 0:00:39.652 ******** 2026-03-28 00:35:09.704011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:09.704027 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-03-28 00:35:09.704044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:09.704063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:09.704094 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:09.704113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-03-28 00:35:09.704133 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:09.704166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-28 00:35:21.407719 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-03-28 00:35:21.407834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-03-28 00:35:21.407849 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-03-28 00:35:21.407910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-03-28 00:35:21.407924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-03-28 00:35:21.407935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-03-28 00:35:21.407962 | orchestrator | 2026-03-28 00:35:21.407975 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-28 00:35:21.408003 | orchestrator | Saturday 28 March 2026 00:35:10 +0000 (0:00:05.653) 0:00:45.305 ******** 2026-03-28 00:35:21.408043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:35:21.408061 | orchestrator | 2026-03-28 00:35:21.408080 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-28 00:35:21.408099 | orchestrator | Saturday 28 March 2026 00:35:12 +0000 (0:00:01.365) 0:00:46.671 ******** 2026-03-28 00:35:21.408118 | orchestrator | ok: [testbed-manager] 2026-03-28 00:35:21.408139 | orchestrator | ok: [testbed-node-0] 2026-03-28 
00:35:21.408154 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:35:21.408192 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:35:21.408204 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:35:21.408214 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:35:21.408225 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:35:21.408235 | orchestrator | 2026-03-28 00:35:21.408246 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-28 00:35:21.408259 | orchestrator | Saturday 28 March 2026 00:35:12 +0000 (0:00:00.997) 0:00:47.669 ******** 2026-03-28 00:35:21.408272 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:35:21.408285 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:35:21.408299 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:35:21.408311 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:35:21.408323 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:35:21.408336 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:35:21.408348 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:35:21.408360 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:35:21.408372 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:35:21.408385 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:35:21.408397 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:35:21.408409 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:35:21.408421 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:35:21.408433 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:35:21.408445 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:35:21.408457 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:35:21.408470 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:35:21.408499 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:35:21.408511 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:35:21.408521 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:35:21.408532 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:35:21.408543 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:35:21.408553 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:35:21.408564 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:35:21.408575 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:35:21.408585 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:35:21.408596 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:35:21.408606 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:35:21.408617 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:35:21.408628 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:35:21.408638 | 
orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-28 00:35:21.408649 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-28 00:35:21.408660 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-28 00:35:21.408677 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-28 00:35:21.408688 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:35:21.408699 | orchestrator | 2026-03-28 00:35:21.408709 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-28 00:35:21.408720 | orchestrator | Saturday 28 March 2026 00:35:13 +0000 (0:00:00.950) 0:00:48.620 ******** 2026-03-28 00:35:21.408731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:35:21.408742 | orchestrator | 2026-03-28 00:35:21.408752 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-28 00:35:21.408763 | orchestrator | Saturday 28 March 2026 00:35:15 +0000 (0:00:01.309) 0:00:49.930 ******** 2026-03-28 00:35:21.408780 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:35:21.408792 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:35:21.408803 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:35:21.408813 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:35:21.408824 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:35:21.408835 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:35:21.408846 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:35:21.408856 | orchestrator | 2026-03-28 00:35:21.408890 | orchestrator | TASK [osism.commons.network : Deploy 
network-extra-init systemd service] ******* 2026-03-28 00:35:21.408901 | orchestrator | Saturday 28 March 2026 00:35:15 +0000 (0:00:00.690) 0:00:50.620 ******** 2026-03-28 00:35:21.408912 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:35:21.408922 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:35:21.408933 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:35:21.408944 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:35:21.408955 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:35:21.408965 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:35:21.408976 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:35:21.408986 | orchestrator | 2026-03-28 00:35:21.408997 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-28 00:35:21.409008 | orchestrator | Saturday 28 March 2026 00:35:16 +0000 (0:00:00.866) 0:00:51.487 ******** 2026-03-28 00:35:21.409018 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:35:21.409029 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:35:21.409040 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:35:21.409050 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:35:21.409061 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:35:21.409071 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:35:21.409082 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:35:21.409092 | orchestrator | 2026-03-28 00:35:21.409103 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-28 00:35:21.409114 | orchestrator | Saturday 28 March 2026 00:35:17 +0000 (0:00:00.636) 0:00:52.123 ******** 2026-03-28 00:35:21.409125 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:35:21.409135 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:35:21.409146 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:35:21.409157 | orchestrator | ok: [testbed-node-3] 2026-03-28 
00:35:21.409168 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:21.409178 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:35:21.409189 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:35:21.409200 | orchestrator |
2026-03-28 00:35:21.409210 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-03-28 00:35:21.409221 | orchestrator | Saturday 28 March 2026 00:35:19 +0000 (0:00:01.815) 0:00:53.939 ********
2026-03-28 00:35:21.409232 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:21.409243 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:35:21.409254 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:35:21.409265 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:35:21.409275 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:35:21.409293 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:35:21.409303 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:35:21.409314 | orchestrator |
2026-03-28 00:35:21.409325 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-28 00:35:21.409336 | orchestrator | Saturday 28 March 2026 00:35:20 +0000 (0:00:01.220) 0:00:55.159 ********
2026-03-28 00:35:21.409347 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:21.409357 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:35:21.409368 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:35:21.409378 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:35:21.409389 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:35:21.409399 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:35:21.409416 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:35:24.401245 | orchestrator |
2026-03-28 00:35:24.401357 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-28 00:35:24.401374 | orchestrator | Saturday 28 March 2026 00:35:22 +0000 (0:00:02.109) 0:00:57.269 ********
2026-03-28 00:35:24.401386 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:35:24.401399 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:35:24.401410 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:35:24.401421 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:35:24.401432 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:35:24.401443 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:35:24.401454 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:35:24.401465 | orchestrator |
2026-03-28 00:35:24.401476 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-28 00:35:24.401487 | orchestrator | Saturday 28 March 2026 00:35:23 +0000 (0:00:00.842) 0:00:58.112 ********
2026-03-28 00:35:24.401498 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:35:24.401509 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:35:24.401520 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:35:24.401530 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:35:24.401541 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:35:24.401552 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:35:24.401562 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:35:24.401573 | orchestrator |
2026-03-28 00:35:24.401584 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:35:24.401596 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-28 00:35:24.401609 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:35:24.401620 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:35:24.401631 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:35:24.401642 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:35:24.401657 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:35:24.401669 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 00:35:24.401680 | orchestrator |
2026-03-28 00:35:24.401691 | orchestrator |
2026-03-28 00:35:24.401702 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:35:24.401713 | orchestrator | Saturday 28 March 2026 00:35:24 +0000 (0:00:00.571) 0:00:58.684 ********
2026-03-28 00:35:24.401724 | orchestrator | ===============================================================================
2026-03-28 00:35:24.401760 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.04s
2026-03-28 00:35:24.401773 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.65s
2026-03-28 00:35:24.401786 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.73s
2026-03-28 00:35:24.401799 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.50s
2026-03-28 00:35:24.401812 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.50s
2026-03-28 00:35:24.401823 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.25s
2026-03-28 00:35:24.401836 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.11s
2026-03-28 00:35:24.401848 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.82s
2026-03-28 00:35:24.401913 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.82s
2026-03-28 00:35:24.401929 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.69s
2026-03-28 00:35:24.401941 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s
2026-03-28 00:35:24.401954 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.57s
2026-03-28 00:35:24.401966 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.37s
2026-03-28 00:35:24.401976 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.32s
2026-03-28 00:35:24.401987 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s
2026-03-28 00:35:24.401998 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.31s
2026-03-28 00:35:24.402009 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.29s
2026-03-28 00:35:24.402082 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.24s
2026-03-28 00:35:24.402097 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.22s
2026-03-28 00:35:24.402108 | orchestrator | osism.commons.network : Create required directories --------------------- 1.21s
2026-03-28 00:35:24.598538 | orchestrator | + osism apply wireguard
2026-03-28 00:35:35.925023 | orchestrator | 2026-03-28 00:35:35 | INFO  | Prepare task for execution of wireguard.
2026-03-28 00:35:36.014227 | orchestrator | 2026-03-28 00:35:36 | INFO  | Task e34170dd-9958-4fe2-95cc-c5ef081018a9 (wireguard) was prepared for execution.
2026-03-28 00:35:36.014350 | orchestrator | 2026-03-28 00:35:36 | INFO  | It takes a moment until task e34170dd-9958-4fe2-95cc-c5ef081018a9 (wireguard) has been started and output is visible here.
2026-03-28 00:35:56.464289 | orchestrator |
2026-03-28 00:35:56.464391 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-28 00:35:56.464404 | orchestrator |
2026-03-28 00:35:56.464413 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-28 00:35:56.464423 | orchestrator | Saturday 28 March 2026 00:35:39 +0000 (0:00:00.297) 0:00:00.297 ********
2026-03-28 00:35:56.464433 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:56.464441 | orchestrator |
2026-03-28 00:35:56.464447 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-28 00:35:56.464454 | orchestrator | Saturday 28 March 2026 00:35:41 +0000 (0:00:01.980) 0:00:02.277 ********
2026-03-28 00:35:56.464461 | orchestrator | changed: [testbed-manager]
2026-03-28 00:35:56.464468 | orchestrator |
2026-03-28 00:35:56.464474 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-28 00:35:56.464480 | orchestrator | Saturday 28 March 2026 00:35:48 +0000 (0:00:06.980) 0:00:09.258 ********
2026-03-28 00:35:56.464486 | orchestrator | changed: [testbed-manager]
2026-03-28 00:35:56.464492 | orchestrator |
2026-03-28 00:35:56.464499 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-28 00:35:56.464507 | orchestrator | Saturday 28 March 2026 00:35:48 +0000 (0:00:00.566) 0:00:09.825 ********
2026-03-28 00:35:56.464516 | orchestrator | changed: [testbed-manager]
2026-03-28 00:35:56.464545 | orchestrator |
2026-03-28 00:35:56.464551 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-28 00:35:56.464558 | orchestrator | Saturday 28 March 2026 00:35:49 +0000 (0:00:00.450) 0:00:10.276 ********
2026-03-28 00:35:56.464564 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:56.464570 | orchestrator |
2026-03-28 00:35:56.464578 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-28 00:35:56.464584 | orchestrator | Saturday 28 March 2026 00:35:50 +0000 (0:00:00.569) 0:00:10.846 ********
2026-03-28 00:35:56.464593 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:56.464601 | orchestrator |
2026-03-28 00:35:56.464610 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-28 00:35:56.464619 | orchestrator | Saturday 28 March 2026 00:35:50 +0000 (0:00:00.447) 0:00:11.294 ********
2026-03-28 00:35:56.464624 | orchestrator | ok: [testbed-manager]
2026-03-28 00:35:56.464631 | orchestrator |
2026-03-28 00:35:56.464651 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-28 00:35:56.464661 | orchestrator | Saturday 28 March 2026 00:35:50 +0000 (0:00:00.474) 0:00:11.768 ********
2026-03-28 00:35:56.464669 | orchestrator | changed: [testbed-manager]
2026-03-28 00:35:56.464677 | orchestrator |
2026-03-28 00:35:56.464684 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-28 00:35:56.464690 | orchestrator | Saturday 28 March 2026 00:35:52 +0000 (0:00:01.181) 0:00:12.950 ********
2026-03-28 00:35:56.464697 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-28 00:35:56.464704 | orchestrator | changed: [testbed-manager]
2026-03-28 00:35:56.464710 | orchestrator |
2026-03-28 00:35:56.464717 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-28 00:35:56.464724 | orchestrator | Saturday 28 March 2026 00:35:53 +0000 (0:00:00.990) 0:00:13.940 ********
2026-03-28 00:35:56.464731 | orchestrator | changed: [testbed-manager]
2026-03-28 00:35:56.464740 | orchestrator |
2026-03-28 00:35:56.464748 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-28 00:35:56.464757 | orchestrator | Saturday 28 March 2026 00:35:55 +0000 (0:00:02.145) 0:00:16.086 ********
2026-03-28 00:35:56.464765 | orchestrator | changed: [testbed-manager]
2026-03-28 00:35:56.464773 | orchestrator |
2026-03-28 00:35:56.464781 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:35:56.464790 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:35:56.464800 | orchestrator |
2026-03-28 00:35:56.464808 | orchestrator |
2026-03-28 00:35:56.464817 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:35:56.464824 | orchestrator | Saturday 28 March 2026 00:35:56 +0000 (0:00:00.981) 0:00:17.067 ********
2026-03-28 00:35:56.464831 | orchestrator | ===============================================================================
2026-03-28 00:35:56.464882 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.98s
2026-03-28 00:35:56.464892 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.15s
2026-03-28 00:35:56.464900 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.98s
2026-03-28 00:35:56.464909 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s
2026-03-28 00:35:56.464918 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s
2026-03-28 00:35:56.464927 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s
2026-03-28 00:35:56.464935 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.57s
2026-03-28 00:35:56.464944 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2026-03-28 00:35:56.464952 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.47s
2026-03-28 00:35:56.464961 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2026-03-28 00:35:56.464979 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.45s
2026-03-28 00:35:56.664790 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-28 00:35:56.696826 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-28 00:35:56.696924 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-28 00:35:56.775090 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 191 0 --:--:-- --:--:-- --:--:-- 192
2026-03-28 00:35:56.789163 | orchestrator | + osism apply --environment custom workarounds
2026-03-28 00:35:58.113946 | orchestrator | 2026-03-28 00:35:58 | INFO  | Trying to run play workarounds in environment custom
2026-03-28 00:36:08.161801 | orchestrator | 2026-03-28 00:36:08 | INFO  | Prepare task for execution of workarounds.
2026-03-28 00:36:08.245295 | orchestrator | 2026-03-28 00:36:08 | INFO  | Task bdecf8c0-601b-463f-a082-30c9b13f4ec3 (workarounds) was prepared for execution.
2026-03-28 00:36:08.245395 | orchestrator | 2026-03-28 00:36:08 | INFO  | It takes a moment until task bdecf8c0-601b-463f-a082-30c9b13f4ec3 (workarounds) has been started and output is visible here.
2026-03-28 00:36:34.283793 | orchestrator |
2026-03-28 00:36:34.284058 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:36:34.284075 | orchestrator |
2026-03-28 00:36:34.284087 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-28 00:36:34.284099 | orchestrator | Saturday 28 March 2026 00:36:11 +0000 (0:00:00.185) 0:00:00.185 ********
2026-03-28 00:36:34.284111 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-28 00:36:34.284122 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-28 00:36:34.284133 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-28 00:36:34.284143 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-28 00:36:34.284154 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-28 00:36:34.284165 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-28 00:36:34.284177 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-28 00:36:34.284188 | orchestrator |
2026-03-28 00:36:34.284199 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-28 00:36:34.284210 | orchestrator |
2026-03-28 00:36:34.284221 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-28 00:36:34.284242 | orchestrator | Saturday 28 March 2026 00:36:12 +0000 (0:00:00.770) 0:00:00.956 ********
2026-03-28 00:36:34.284254 | orchestrator | ok: [testbed-manager]
2026-03-28 00:36:34.284266 | orchestrator |
2026-03-28 00:36:34.284277 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-28 00:36:34.284288 | orchestrator |
2026-03-28 00:36:34.284299 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-28 00:36:34.284310 | orchestrator | Saturday 28 March 2026 00:36:15 +0000 (0:00:02.972) 0:00:03.928 ********
2026-03-28 00:36:34.284321 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:34.284334 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:34.284347 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:34.284359 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:34.284372 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:34.284384 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:34.284396 | orchestrator |
2026-03-28 00:36:34.284409 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-28 00:36:34.284421 | orchestrator |
2026-03-28 00:36:34.284434 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-28 00:36:34.284447 | orchestrator | Saturday 28 March 2026 00:36:17 +0000 (0:00:02.482) 0:00:06.411 ********
2026-03-28 00:36:34.284480 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:36:34.284492 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:36:34.284503 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:36:34.284514 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:36:34.284525 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:36:34.284536 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-28 00:36:34.284546 | orchestrator |
2026-03-28 00:36:34.284557 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-28 00:36:34.284568 | orchestrator | Saturday 28 March 2026 00:36:19 +0000 (0:00:01.306) 0:00:07.718 ********
2026-03-28 00:36:34.284579 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:34.284590 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:34.284601 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:34.284611 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:34.284622 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:34.284633 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:34.284644 | orchestrator |
2026-03-28 00:36:34.284654 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-28 00:36:34.284665 | orchestrator | Saturday 28 March 2026 00:36:22 +0000 (0:00:03.798) 0:00:11.517 ********
2026-03-28 00:36:34.284678 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:36:34.284698 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:36:34.284715 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:36:34.284733 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:36:34.284750 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:36:34.284768 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:36:34.284784 | orchestrator |
2026-03-28 00:36:34.284826 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-28 00:36:34.284847 | orchestrator |
2026-03-28 00:36:34.284867 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-28 00:36:34.284887 | orchestrator | Saturday 28 March 2026 00:36:23 +0000 (0:00:00.547) 0:00:12.064 ********
2026-03-28 00:36:34.284901 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:34.284912 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:34.284923 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:34.284934 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:34.284944 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:34.284955 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:34.284965 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:34.284975 | orchestrator |
2026-03-28 00:36:34.284986 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-28 00:36:34.284997 | orchestrator | Saturday 28 March 2026 00:36:25 +0000 (0:00:01.803) 0:00:13.867 ********
2026-03-28 00:36:34.285008 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:34.285018 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:34.285029 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:34.285039 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:34.285050 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:34.285060 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:34.285092 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:34.285103 | orchestrator |
2026-03-28 00:36:34.285114 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-28 00:36:34.285125 | orchestrator | Saturday 28 March 2026 00:36:26 +0000 (0:00:01.536) 0:00:15.404 ********
2026-03-28 00:36:34.285136 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:34.285147 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:34.285157 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:34.285178 | orchestrator | ok: [testbed-manager]
2026-03-28 00:36:34.285189 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:34.285199 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:34.285210 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:34.285220 | orchestrator |
2026-03-28 00:36:34.285231 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-28 00:36:34.285242 | orchestrator | Saturday 28 March 2026 00:36:28 +0000 (0:00:01.668) 0:00:17.073 ********
2026-03-28 00:36:34.285253 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:36:34.285264 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:36:34.285282 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:36:34.285309 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:36:34.285330 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:36:34.285346 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:36:34.285363 | orchestrator | changed: [testbed-manager]
2026-03-28 00:36:34.285380 | orchestrator |
2026-03-28 00:36:34.285399 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-28 00:36:34.285418 | orchestrator | Saturday 28 March 2026 00:36:30 +0000 (0:00:02.178) 0:00:19.251 ********
2026-03-28 00:36:34.285446 | orchestrator | skipping: [testbed-manager]
2026-03-28 00:36:34.285458 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:36:34.285469 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:36:34.285479 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:36:34.285489 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:36:34.285500 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:36:34.285510 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:36:34.285521 | orchestrator |
2026-03-28 00:36:34.285532 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-28 00:36:34.285542 | orchestrator |
2026-03-28 00:36:34.285553 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-28 00:36:34.285563 | orchestrator | Saturday 28 March 2026 00:36:31 +0000 (0:00:00.779) 0:00:20.030 ********
2026-03-28 00:36:34.285574 | orchestrator | ok: [testbed-manager]
2026-03-28 00:36:34.285584 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:36:34.285595 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:36:34.285605 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:36:34.285616 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:36:34.285626 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:36:34.285637 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:36:34.285647 | orchestrator |
2026-03-28 00:36:34.285658 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:36:34.285670 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-28 00:36:34.285682 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:36:34.285693 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:36:34.285703 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:36:34.285714 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:36:34.285725 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:36:34.285735 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 00:36:34.285746 | orchestrator |
2026-03-28 00:36:34.285757 | orchestrator |
2026-03-28 00:36:34.285767 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:36:34.285789 | orchestrator | Saturday 28 March 2026 00:36:34 +0000 (0:00:02.883) 0:00:22.913 ********
2026-03-28 00:36:34.285829 | orchestrator | ===============================================================================
2026-03-28 00:36:34.285849 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.80s
2026-03-28 00:36:34.285867 | orchestrator | Apply netplan configuration --------------------------------------------- 2.97s
2026-03-28 00:36:34.285886 | orchestrator | Install python3-docker -------------------------------------------------- 2.88s
2026-03-28 00:36:34.285900 | orchestrator | Apply netplan configuration --------------------------------------------- 2.48s
2026-03-28 00:36:34.285910 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.18s
2026-03-28 00:36:34.285921 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.80s
2026-03-28 00:36:34.285932 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.67s
2026-03-28 00:36:34.285942 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.54s
2026-03-28 00:36:34.285953 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.31s
2026-03-28 00:36:34.285963 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.78s
2026-03-28 00:36:34.285974 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s
2026-03-28 00:36:34.285994 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.55s
2026-03-28 00:36:34.791286 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-28 00:36:46.128074 | orchestrator | 2026-03-28 00:36:46 | INFO  | Prepare task for execution of reboot.
2026-03-28 00:36:46.222190 | orchestrator | 2026-03-28 00:36:46 | INFO  | Task b6ca4094-ff69-41ec-bb62-ece24438360b (reboot) was prepared for execution.
2026-03-28 00:36:46.222289 | orchestrator | 2026-03-28 00:36:46 | INFO  | It takes a moment until task b6ca4094-ff69-41ec-bb62-ece24438360b (reboot) has been started and output is visible here.
2026-03-28 00:36:57.763719 | orchestrator | 2026-03-28 00:36:57.763895 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:36:57.763915 | orchestrator | 2026-03-28 00:36:57.763927 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 00:36:57.763939 | orchestrator | Saturday 28 March 2026 00:36:49 +0000 (0:00:00.258) 0:00:00.258 ******** 2026-03-28 00:36:57.763950 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:57.763963 | orchestrator | 2026-03-28 00:36:57.763974 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:36:57.763985 | orchestrator | Saturday 28 March 2026 00:36:49 +0000 (0:00:00.168) 0:00:00.426 ******** 2026-03-28 00:36:57.764012 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:36:57.764024 | orchestrator | 2026-03-28 00:36:57.764035 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 00:36:57.764046 | orchestrator | Saturday 28 March 2026 00:36:51 +0000 (0:00:01.311) 0:00:01.737 ******** 2026-03-28 00:36:57.764057 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:36:57.764068 | orchestrator | 2026-03-28 00:36:57.764079 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:36:57.764090 | orchestrator | 2026-03-28 00:36:57.764101 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 00:36:57.764112 | orchestrator | Saturday 28 March 2026 00:36:51 +0000 (0:00:00.109) 0:00:01.847 ******** 2026-03-28 00:36:57.764122 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:36:57.764133 | orchestrator | 2026-03-28 00:36:57.764144 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:36:57.764155 | orchestrator | Saturday 28 March 
2026 00:36:51 +0000 (0:00:00.100) 0:00:01.948 ******** 2026-03-28 00:36:57.764166 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:36:57.764177 | orchestrator | 2026-03-28 00:36:57.764226 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 00:36:57.764247 | orchestrator | Saturday 28 March 2026 00:36:52 +0000 (0:00:01.024) 0:00:02.972 ******** 2026-03-28 00:36:57.764266 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:36:57.764284 | orchestrator | 2026-03-28 00:36:57.764298 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:36:57.764309 | orchestrator | 2026-03-28 00:36:57.764320 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 00:36:57.764331 | orchestrator | Saturday 28 March 2026 00:36:52 +0000 (0:00:00.120) 0:00:03.093 ******** 2026-03-28 00:36:57.764341 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:57.764352 | orchestrator | 2026-03-28 00:36:57.764362 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:36:57.764373 | orchestrator | Saturday 28 March 2026 00:36:52 +0000 (0:00:00.121) 0:00:03.214 ******** 2026-03-28 00:36:57.764384 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:36:57.764394 | orchestrator | 2026-03-28 00:36:57.764405 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 00:36:57.764416 | orchestrator | Saturday 28 March 2026 00:36:53 +0000 (0:00:01.028) 0:00:04.243 ******** 2026-03-28 00:36:57.764427 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:36:57.764437 | orchestrator | 2026-03-28 00:36:57.764448 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:36:57.764458 | orchestrator | 2026-03-28 00:36:57.764469 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-03-28 00:36:57.764479 | orchestrator | Saturday 28 March 2026 00:36:53 +0000 (0:00:00.112) 0:00:04.356 ******** 2026-03-28 00:36:57.764490 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:57.764500 | orchestrator | 2026-03-28 00:36:57.764511 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:36:57.764522 | orchestrator | Saturday 28 March 2026 00:36:53 +0000 (0:00:00.122) 0:00:04.478 ******** 2026-03-28 00:36:57.764532 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:36:57.764543 | orchestrator | 2026-03-28 00:36:57.764554 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 00:36:57.764564 | orchestrator | Saturday 28 March 2026 00:36:54 +0000 (0:00:01.047) 0:00:05.525 ******** 2026-03-28 00:36:57.764575 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:36:57.764585 | orchestrator | 2026-03-28 00:36:57.764596 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:36:57.764606 | orchestrator | 2026-03-28 00:36:57.764617 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 00:36:57.764628 | orchestrator | Saturday 28 March 2026 00:36:54 +0000 (0:00:00.132) 0:00:05.658 ******** 2026-03-28 00:36:57.764638 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:57.764649 | orchestrator | 2026-03-28 00:36:57.764660 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:36:57.764670 | orchestrator | Saturday 28 March 2026 00:36:55 +0000 (0:00:00.229) 0:00:05.888 ******** 2026-03-28 00:36:57.764681 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:36:57.764692 | orchestrator | 2026-03-28 00:36:57.764702 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-28 00:36:57.764713 | orchestrator | Saturday 28 March 2026 00:36:56 +0000 (0:00:01.050) 0:00:06.938 ******** 2026-03-28 00:36:57.764724 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:36:57.764734 | orchestrator | 2026-03-28 00:36:57.764745 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-28 00:36:57.764755 | orchestrator | 2026-03-28 00:36:57.764766 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-28 00:36:57.764813 | orchestrator | Saturday 28 March 2026 00:36:56 +0000 (0:00:00.113) 0:00:07.051 ******** 2026-03-28 00:36:57.764832 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:57.764851 | orchestrator | 2026-03-28 00:36:57.764871 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-28 00:36:57.764901 | orchestrator | Saturday 28 March 2026 00:36:56 +0000 (0:00:00.115) 0:00:07.167 ******** 2026-03-28 00:36:57.764912 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:36:57.764923 | orchestrator | 2026-03-28 00:36:57.764934 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-28 00:36:57.764944 | orchestrator | Saturday 28 March 2026 00:36:57 +0000 (0:00:01.000) 0:00:08.168 ******** 2026-03-28 00:36:57.764974 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:36:57.764986 | orchestrator | 2026-03-28 00:36:57.764997 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:36:57.765009 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:36:57.765021 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:36:57.765039 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-28 00:36:57.765050 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:36:57.765061 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:36:57.765071 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:36:57.765082 | orchestrator | 2026-03-28 00:36:57.765093 | orchestrator | 2026-03-28 00:36:57.765103 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:36:57.765114 | orchestrator | Saturday 28 March 2026 00:36:57 +0000 (0:00:00.035) 0:00:08.204 ******** 2026-03-28 00:36:57.765124 | orchestrator | =============================================================================== 2026-03-28 00:36:57.765135 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.46s 2026-03-28 00:36:57.765146 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.86s 2026-03-28 00:36:57.765156 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2026-03-28 00:36:57.970825 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-28 00:37:09.366922 | orchestrator | 2026-03-28 00:37:09 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-28 00:37:09.444444 | orchestrator | 2026-03-28 00:37:09 | INFO  | Task d71c4bfe-2060-41f3-b08d-b6a5896f3010 (wait-for-connection) was prepared for execution. 2026-03-28 00:37:09.444565 | orchestrator | 2026-03-28 00:37:09 | INFO  | It takes a moment until task d71c4bfe-2060-41f3-b08d-b6a5896f3010 (wait-for-connection) has been started and output is visible here. 
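The sequencing visible above follows a common pattern: the reboot play is gated on an explicit `ireallymeanit=yes` confirmation (hence the skipped "Exit playbook, if user did not mean to reboot systems" tasks), the reboot itself does not wait, and reachability is verified afterwards by a separate `wait-for-connection` play. A hypothetical wrapper sketching that flow (the `osism apply` commands and the `ireallymeanit` flag are taken from the log; the `reboot_and_wait` function itself is illustrative and not part of the testbed scripts):

```shell
# Illustrative sketch only: sequencing reconstructed from the log above.
reboot_and_wait() {
    local limit="$1"
    # Without ireallymeanit=yes the play bails out at the
    # "Exit playbook, if user did not mean to reboot systems" task.
    osism apply reboot -l "$limit" -e ireallymeanit=yes
    # The reboot task does not wait for the nodes to come back;
    # a separate play polls until SSH is reachable again.
    osism apply wait-for-connection -l "$limit" -e ireallymeanit=yes
}
```

Called as `reboot_and_wait testbed-nodes`, this mirrors the two task runs logged above.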
2026-03-28 00:37:24.568223 | orchestrator | 2026-03-28 00:37:24.568326 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-28 00:37:24.568338 | orchestrator | 2026-03-28 00:37:24.568348 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-28 00:37:24.568356 | orchestrator | Saturday 28 March 2026 00:37:12 +0000 (0:00:00.319) 0:00:00.319 ******** 2026-03-28 00:37:24.568365 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:37:24.568374 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:37:24.568383 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:37:24.568392 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:37:24.568400 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:37:24.568407 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:37:24.568416 | orchestrator | 2026-03-28 00:37:24.568424 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:37:24.568457 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:37:24.568479 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:37:24.568488 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:37:24.568496 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:37:24.568503 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:37:24.568511 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:37:24.568519 | orchestrator | 2026-03-28 00:37:24.568527 | orchestrator | 2026-03-28 00:37:24.568535 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-28 00:37:24.568542 | orchestrator | Saturday 28 March 2026 00:37:24 +0000 (0:00:11.484) 0:00:11.803 ******** 2026-03-28 00:37:24.568550 | orchestrator | =============================================================================== 2026-03-28 00:37:24.568558 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.48s 2026-03-28 00:37:24.775799 | orchestrator | + osism apply hddtemp 2026-03-28 00:37:36.291064 | orchestrator | 2026-03-28 00:37:36 | INFO  | Prepare task for execution of hddtemp. 2026-03-28 00:37:36.372207 | orchestrator | 2026-03-28 00:37:36 | INFO  | Task 0ead6098-05f7-43bf-8fe7-3f74fe57f389 (hddtemp) was prepared for execution. 2026-03-28 00:37:36.372311 | orchestrator | 2026-03-28 00:37:36 | INFO  | It takes a moment until task 0ead6098-05f7-43bf-8fe7-3f74fe57f389 (hddtemp) has been started and output is visible here. 2026-03-28 00:38:03.013025 | orchestrator | 2026-03-28 00:38:03.013190 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-28 00:38:03.013229 | orchestrator | 2026-03-28 00:38:03.013248 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-28 00:38:03.013265 | orchestrator | Saturday 28 March 2026 00:37:39 +0000 (0:00:00.358) 0:00:00.358 ******** 2026-03-28 00:38:03.013281 | orchestrator | ok: [testbed-manager] 2026-03-28 00:38:03.013299 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:38:03.013314 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:38:03.013329 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:38:03.013363 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:38:03.013380 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:38:03.013395 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:38:03.013411 | orchestrator | 2026-03-28 00:38:03.013427 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-28 00:38:03.013443 | orchestrator | Saturday 28 March 2026 00:37:40 +0000 (0:00:00.623) 0:00:00.982 ******** 2026-03-28 00:38:03.013462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:38:03.013482 | orchestrator | 2026-03-28 00:38:03.013499 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-28 00:38:03.013515 | orchestrator | Saturday 28 March 2026 00:37:41 +0000 (0:00:01.158) 0:00:02.140 ******** 2026-03-28 00:38:03.013530 | orchestrator | ok: [testbed-manager] 2026-03-28 00:38:03.013547 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:38:03.013604 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:38:03.013623 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:38:03.013640 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:38:03.013656 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:38:03.013669 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:38:03.013708 | orchestrator | 2026-03-28 00:38:03.013720 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-28 00:38:03.013756 | orchestrator | Saturday 28 March 2026 00:37:43 +0000 (0:00:02.417) 0:00:04.557 ******** 2026-03-28 00:38:03.013767 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:03.013781 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:38:03.013792 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:38:03.013803 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:38:03.013814 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:38:03.013825 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:38:03.013836 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:38:03.013848 | 
orchestrator | 2026-03-28 00:38:03.013859 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-28 00:38:03.013871 | orchestrator | Saturday 28 March 2026 00:37:44 +0000 (0:00:00.922) 0:00:05.480 ******** 2026-03-28 00:38:03.013882 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:38:03.013893 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:38:03.013904 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:38:03.013916 | orchestrator | ok: [testbed-manager] 2026-03-28 00:38:03.013928 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:38:03.013939 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:38:03.013949 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:38:03.013958 | orchestrator | 2026-03-28 00:38:03.013968 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-28 00:38:03.013978 | orchestrator | Saturday 28 March 2026 00:37:46 +0000 (0:00:01.425) 0:00:06.906 ******** 2026-03-28 00:38:03.013987 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:38:03.013997 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:38:03.014006 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:38:03.014129 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:38:03.014143 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:38:03.014153 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:03.014162 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:38:03.014172 | orchestrator | 2026-03-28 00:38:03.014182 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-28 00:38:03.014191 | orchestrator | Saturday 28 March 2026 00:37:47 +0000 (0:00:00.720) 0:00:07.626 ******** 2026-03-28 00:38:03.014201 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:03.014210 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:38:03.014220 | orchestrator | changed: [testbed-node-4] 
2026-03-28 00:38:03.014231 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:38:03.014240 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:38:03.014250 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:38:03.014259 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:38:03.014269 | orchestrator | 2026-03-28 00:38:03.014278 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-28 00:38:03.014288 | orchestrator | Saturday 28 March 2026 00:37:59 +0000 (0:00:12.524) 0:00:20.151 ******** 2026-03-28 00:38:03.014298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:38:03.014308 | orchestrator | 2026-03-28 00:38:03.014318 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-28 00:38:03.014328 | orchestrator | Saturday 28 March 2026 00:38:00 +0000 (0:00:01.220) 0:00:21.371 ******** 2026-03-28 00:38:03.014337 | orchestrator | changed: [testbed-manager] 2026-03-28 00:38:03.014347 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:38:03.014356 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:38:03.014366 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:38:03.014375 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:38:03.014385 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:38:03.014394 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:38:03.014404 | orchestrator | 2026-03-28 00:38:03.014413 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:38:03.014433 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:38:03.014467 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:38:03.014477 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:38:03.014490 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:38:03.014507 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:38:03.014523 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:38:03.014540 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:38:03.014556 | orchestrator | 2026-03-28 00:38:03.014571 | orchestrator | 2026-03-28 00:38:03.014588 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:38:03.014600 | orchestrator | Saturday 28 March 2026 00:38:02 +0000 (0:00:01.899) 0:00:23.271 ******** 2026-03-28 00:38:03.014610 | orchestrator | =============================================================================== 2026-03-28 00:38:03.014619 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.52s 2026-03-28 00:38:03.014629 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.42s 2026-03-28 00:38:03.014639 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s 2026-03-28 00:38:03.014648 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.43s 2026-03-28 00:38:03.014658 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.22s 2026-03-28 00:38:03.014667 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.16s 2026-03-28 00:38:03.014677 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.92s 2026-03-28 00:38:03.014686 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.72s 2026-03-28 00:38:03.014696 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.62s 2026-03-28 00:38:03.255872 | orchestrator | ++ semver latest 7.1.1 2026-03-28 00:38:03.321457 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:38:03.321578 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 00:38:03.321600 | orchestrator | + sudo systemctl restart manager.service 2026-03-28 00:38:16.721895 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-28 00:38:16.722142 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-28 00:38:16.722161 | orchestrator | + local max_attempts=60 2026-03-28 00:38:16.722173 | orchestrator | + local name=ceph-ansible 2026-03-28 00:38:16.722184 | orchestrator | + local attempt_num=1 2026-03-28 00:38:16.722207 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:38:16.754380 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:38:16.754459 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:38:16.754470 | orchestrator | + sleep 5 2026-03-28 00:38:21.759572 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:38:21.790413 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:38:21.790507 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:38:21.790521 | orchestrator | + sleep 5 2026-03-28 00:38:26.794203 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:38:26.838965 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:38:26.839062 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:38:26.839108 | orchestrator | + sleep 5 2026-03-28 00:38:31.844986 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:38:31.888664 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:38:31.888855 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:38:31.888877 | orchestrator | + sleep 5 2026-03-28 00:38:36.893855 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:38:36.934011 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:38:36.934123 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:38:36.934131 | orchestrator | + sleep 5 2026-03-28 00:38:41.939442 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:38:41.987679 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:38:41.987822 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:38:41.987837 | orchestrator | + sleep 5 2026-03-28 00:38:46.992473 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:38:47.034968 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:38:47.035041 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:38:47.035049 | orchestrator | + sleep 5 2026-03-28 00:38:52.040789 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:38:52.085442 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:38:52.085577 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:38:52.085603 | orchestrator | + sleep 5 2026-03-28 00:38:57.089527 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:38:57.133474 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:38:57.133568 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:38:57.133581 | orchestrator | + sleep 5 2026-03-28 00:39:02.138066 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:39:02.182261 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:39:02.182371 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:39:02.182387 | orchestrator | + sleep 5 2026-03-28 00:39:07.186055 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:39:07.224102 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:39:07.224214 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:39:07.224232 | orchestrator | + sleep 5 2026-03-28 00:39:12.229545 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:39:12.267287 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:39:12.267385 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:39:12.267401 | orchestrator | + sleep 5 2026-03-28 00:39:17.272027 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:39:17.314960 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-28 00:39:17.315050 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-28 00:39:17.315061 | orchestrator | + sleep 5 2026-03-28 00:39:22.321172 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-28 00:39:22.366361 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:39:22.366476 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-28 00:39:22.366492 | orchestrator | + local max_attempts=60 2026-03-28 00:39:22.366505 | orchestrator | + local name=kolla-ansible 2026-03-28 00:39:22.366516 | orchestrator | + local attempt_num=1 2026-03-28 00:39:22.367176 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-28 00:39:22.405896 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:39:22.406090 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-28 00:39:22.406121 | orchestrator | + local max_attempts=60 2026-03-28 00:39:22.406134 | orchestrator | + local name=osism-ansible 2026-03-28 00:39:22.406143 | orchestrator | + local attempt_num=1 2026-03-28 00:39:22.406783 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-28 00:39:22.446431 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-28 00:39:22.446527 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-28 00:39:22.446542 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-28 00:39:22.626967 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-28 00:39:22.814174 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-28 00:39:23.012051 | orchestrator | ARA in osism-ansible already disabled. 2026-03-28 00:39:23.196485 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-28 00:39:23.197259 | orchestrator | + osism apply gather-facts 2026-03-28 00:39:34.738096 | orchestrator | 2026-03-28 00:39:34 | INFO  | Prepare task for execution of gather-facts. 2026-03-28 00:39:34.815579 | orchestrator | 2026-03-28 00:39:34 | INFO  | Task 9b7d1a94-332e-4dad-a748-925a9ee2d175 (gather-facts) was prepared for execution. 2026-03-28 00:39:34.815659 | orchestrator | 2026-03-28 00:39:34 | INFO  | It takes a moment until task 9b7d1a94-332e-4dad-a748-925a9ee2d175 (gather-facts) has been started and output is visible here. 
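The `wait_for_container_healthy` calls traced above poll the Docker health status every five seconds until the container reports `healthy` or the attempt limit is reached. Reconstructed from the `set -x` output (variable names match the trace; the trace invokes `/usr/bin/docker` by absolute path, shortened to `docker` here, and the failure message is an assumption):

```shell
# Reconstruction of the helper whose set -x trace appears above.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health check result until it is "healthy";
    # the trace shows it passing through "unhealthy" and "starting" first.
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2  # assumed message
            return 1
        fi
        sleep 5
    done
}
```

With a 5-second interval, `wait_for_container_healthy 60 ceph-ansible` allows roughly five minutes for the container to pass its health check, matching the ~66 seconds it takes in the log above.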
2026-03-28 00:39:45.652287 | orchestrator | 2026-03-28 00:39:45.652433 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 00:39:45.652464 | orchestrator | 2026-03-28 00:39:45.652486 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-28 00:39:45.652499 | orchestrator | Saturday 28 March 2026 00:39:38 +0000 (0:00:00.299) 0:00:00.299 ******** 2026-03-28 00:39:45.652510 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:39:45.652523 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:39:45.652534 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:39:45.652545 | orchestrator | ok: [testbed-manager] 2026-03-28 00:39:45.652555 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:39:45.652567 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:39:45.652578 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:39:45.652589 | orchestrator | 2026-03-28 00:39:45.652600 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 00:39:45.652611 | orchestrator | 2026-03-28 00:39:45.652622 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 00:39:45.652634 | orchestrator | Saturday 28 March 2026 00:39:44 +0000 (0:00:06.701) 0:00:07.001 ******** 2026-03-28 00:39:45.652645 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:39:45.652687 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:39:45.652699 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:39:45.652710 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:39:45.652721 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:39:45.652732 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:39:45.652743 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:39:45.652754 | orchestrator | 2026-03-28 00:39:45.652765 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-28 00:39:45.652776 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:39:45.652789 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:39:45.652802 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:39:45.652816 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:39:45.652828 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:39:45.652841 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:39:45.652854 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:39:45.652866 | orchestrator | 2026-03-28 00:39:45.652879 | orchestrator | 2026-03-28 00:39:45.652892 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:39:45.652905 | orchestrator | Saturday 28 March 2026 00:39:45 +0000 (0:00:00.646) 0:00:07.647 ******** 2026-03-28 00:39:45.652918 | orchestrator | =============================================================================== 2026-03-28 00:39:45.652930 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.70s 2026-03-28 00:39:45.652975 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s 2026-03-28 00:39:45.842913 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-28 00:39:45.864127 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-28 
00:39:45.884296 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-28 00:39:45.897112 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-28 00:39:45.907004 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-28 00:39:45.916158 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-28 00:39:45.925653 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-28 00:39:45.935166 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-28 00:39:45.954103 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-28 00:39:45.968122 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-28 00:39:45.983065 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-28 00:39:45.996327 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-28 00:39:46.012737 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-28 00:39:46.029917 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-28 00:39:46.043872 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-28 00:39:46.057157 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-28 00:39:46.070719 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-28 00:39:46.085092 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-28 00:39:46.098739 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-28 00:39:46.111158 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-28 00:39:46.132036 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-28 00:39:46.150855 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-28 00:39:46.169182 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-28 00:39:46.180961 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-28 00:39:46.679363 | orchestrator | ok: Runtime: 0:24:22.913931 2026-03-28 00:39:46.790798 | 2026-03-28 00:39:46.790981 | TASK [Deploy services] 2026-03-28 00:39:47.325900 | orchestrator | skipping: Conditional result was False 2026-03-28 00:39:47.343398 | 2026-03-28 00:39:47.343560 | TASK [Deploy in a nutshell] 2026-03-28 00:39:48.041981 | orchestrator | 2026-03-28 00:39:48.042171 | orchestrator | # PULL IMAGES 2026-03-28 00:39:48.042183 | orchestrator | 2026-03-28 00:39:48.042188 | orchestrator | + set -e 2026-03-28 00:39:48.042195 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 00:39:48.042203 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 00:39:48.042209 | orchestrator | ++ INTERACTIVE=false 2026-03-28 00:39:48.042230 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 00:39:48.042241 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 00:39:48.042250 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 00:39:48.042256 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 00:39:48.042266 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 00:39:48.042272 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 00:39:48.042282 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 00:39:48.042289 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 00:39:48.042299 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 00:39:48.042305 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-28 00:39:48.042314 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-28 00:39:48.042321 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-03-28 00:39:48.042329 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-03-28 00:39:48.042335 | orchestrator | ++ export ARA=false 2026-03-28 00:39:48.042341 | orchestrator | ++ ARA=false 2026-03-28 00:39:48.042349 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 00:39:48.042353 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 00:39:48.042357 | orchestrator | ++ export TEMPEST=true 2026-03-28 00:39:48.042361 | orchestrator | ++ TEMPEST=true 2026-03-28 00:39:48.042364 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 00:39:48.042368 | orchestrator | ++ IS_ZUUL=true 2026-03-28 00:39:48.042372 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2026-03-28 00:39:48.042376 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2026-03-28 00:39:48.042380 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 00:39:48.042384 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 00:39:48.042387 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 00:39:48.042391 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 00:39:48.042395 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 00:39:48.042399 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 00:39:48.042402 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 00:39:48.042406 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 00:39:48.042410 | orchestrator | + echo 2026-03-28 00:39:48.042414 | orchestrator | + echo '# PULL IMAGES' 2026-03-28 00:39:48.042418 | orchestrator | + echo 2026-03-28 00:39:48.043128 | orchestrator | ++ semver latest 7.0.0 2026-03-28 00:39:48.087177 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 00:39:48.087281 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 00:39:48.087297 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-28 00:39:49.397751 | orchestrator | 2026-03-28 00:39:49 | INFO  | Trying to run play pull-images in environment custom 2026-03-28 00:39:59.518699 | orchestrator | 2026-03-28 00:39:59 | INFO  | Prepare task for execution of pull-images. 2026-03-28 00:39:59.611398 | orchestrator | 2026-03-28 00:39:59 | INFO  | Task 0440169d-dba4-4de1-b4bf-aae67301f221 (pull-images) was prepared for execution. 2026-03-28 00:39:59.611506 | orchestrator | 2026-03-28 00:39:59 | INFO  | Task 0440169d-dba4-4de1-b4bf-aae67301f221 is running in background. No more output. Check ARA for logs. 2026-03-28 00:40:01.237495 | orchestrator | 2026-03-28 00:40:01 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-28 00:40:11.345040 | orchestrator | 2026-03-28 00:40:11 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-28 00:40:11.444834 | orchestrator | 2026-03-28 00:40:11 | INFO  | Task 29b8a433-d2c8-45e3-ae09-2677bd714177 (wipe-partitions) was prepared for execution. 2026-03-28 00:40:11.444931 | orchestrator | 2026-03-28 00:40:11 | INFO  | It takes a moment until task 29b8a433-d2c8-45e3-ae09-2677bd714177 (wipe-partitions) has been started and output is visible here. 
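The version gate traced earlier (`semver latest 7.0.0` returning -1, then the literal `latest` match before `osism apply --no-wait -r 2 -e custom pull-images`) can be sketched as follows. This is a minimal reconstruction, not the actual `include.sh` helper: the `semver` function here is a hypothetical stand-in that only distinguishes equal versions and the `latest` tag.

```shell
# Hypothetical stand-in for the semver comparison helper used by the
# testbed scripts; prints -1/0/1 like the trace above shows.
semver() {
  if [ "$1" = "$2" ]; then echo 0        # identical versions
  elif [ "$1" = "latest" ]; then echo -1 # non-release tag compares lower
  else echo 1; fi
}

MANAGER_VERSION=latest
result=$(semver "$MANAGER_VERSION" 7.0.0)
# Matches the trace: `[[ -1 -ge 0 ]]` fails, `[[ latest == latest ]]` passes,
# so pull-images is still applied for the rolling "latest" manager.
if [ "$result" -ge 0 ] || [ "$MANAGER_VERSION" = "latest" ]; then
  echo "osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The point of the two-step check is that a pinned manager version only pulls images when it is at least 7.0.0, while the floating `latest` tag always does.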
2026-03-28 00:40:23.477596 | orchestrator | 2026-03-28 00:40:23.477752 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-28 00:40:23.477769 | orchestrator | 2026-03-28 00:40:23.477782 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-28 00:40:23.477799 | orchestrator | Saturday 28 March 2026 00:40:14 +0000 (0:00:00.176) 0:00:00.177 ******** 2026-03-28 00:40:23.477839 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:40:23.477853 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:40:23.477864 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:40:23.477875 | orchestrator | 2026-03-28 00:40:23.477887 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-28 00:40:23.477898 | orchestrator | Saturday 28 March 2026 00:40:15 +0000 (0:00:01.004) 0:00:01.181 ******** 2026-03-28 00:40:23.477912 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:40:23.477924 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:40:23.477935 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:40:23.477945 | orchestrator | 2026-03-28 00:40:23.477956 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-28 00:40:23.477967 | orchestrator | Saturday 28 March 2026 00:40:16 +0000 (0:00:00.282) 0:00:01.463 ******** 2026-03-28 00:40:23.477978 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:40:23.477990 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:40:23.478001 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:40:23.478011 | orchestrator | 2026-03-28 00:40:23.478080 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-28 00:40:23.478092 | orchestrator | Saturday 28 March 2026 00:40:16 +0000 (0:00:00.603) 0:00:02.067 ******** 2026-03-28 00:40:23.478103 | orchestrator | skipping: 
[testbed-node-3] 2026-03-28 00:40:23.478114 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:40:23.478125 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:40:23.478135 | orchestrator | 2026-03-28 00:40:23.478146 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-28 00:40:23.478157 | orchestrator | Saturday 28 March 2026 00:40:17 +0000 (0:00:00.260) 0:00:02.327 ******** 2026-03-28 00:40:23.478168 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-28 00:40:23.478183 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-28 00:40:23.478194 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-28 00:40:23.478204 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-28 00:40:23.478215 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-28 00:40:23.478226 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-28 00:40:23.478237 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-28 00:40:23.478247 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-28 00:40:23.478258 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-28 00:40:23.478268 | orchestrator | 2026-03-28 00:40:23.478279 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-28 00:40:23.478290 | orchestrator | Saturday 28 March 2026 00:40:18 +0000 (0:00:01.362) 0:00:03.690 ******** 2026-03-28 00:40:23.478301 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-28 00:40:23.478312 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-28 00:40:23.478323 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-28 00:40:23.478333 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-28 00:40:23.478344 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-28 00:40:23.478355 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-28 00:40:23.478366 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-28 00:40:23.478376 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-28 00:40:23.478386 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-28 00:40:23.478397 | orchestrator | 2026-03-28 00:40:23.478408 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-28 00:40:23.478418 | orchestrator | Saturday 28 March 2026 00:40:19 +0000 (0:00:01.390) 0:00:05.080 ******** 2026-03-28 00:40:23.478429 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-28 00:40:23.478440 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-28 00:40:23.478450 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-28 00:40:23.478468 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-28 00:40:23.478490 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-28 00:40:23.478501 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-28 00:40:23.478511 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-28 00:40:23.478522 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-28 00:40:23.478532 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-28 00:40:23.478543 | orchestrator | 2026-03-28 00:40:23.478554 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-28 00:40:23.478564 | orchestrator | Saturday 28 March 2026 00:40:21 +0000 (0:00:02.048) 0:00:07.128 ******** 2026-03-28 00:40:23.478575 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:40:23.478586 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:40:23.478596 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:40:23.478607 | orchestrator | 2026-03-28 00:40:23.478618 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-28 00:40:23.478629 | orchestrator | Saturday 28 March 2026 00:40:22 +0000 (0:00:00.591) 0:00:07.720 ******** 2026-03-28 00:40:23.478718 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:40:23.478730 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:40:23.478741 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:40:23.478752 | orchestrator | 2026-03-28 00:40:23.478763 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:40:23.478776 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:40:23.478788 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:40:23.478819 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:40:23.478831 | orchestrator | 2026-03-28 00:40:23.478842 | orchestrator | 2026-03-28 00:40:23.478853 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:40:23.478864 | orchestrator | Saturday 28 March 2026 00:40:23 +0000 (0:00:00.776) 0:00:08.496 ******** 2026-03-28 00:40:23.478875 | orchestrator | =============================================================================== 2026-03-28 00:40:23.478885 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.05s 2026-03-28 00:40:23.478896 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.39s 2026-03-28 00:40:23.478906 | orchestrator | Check device availability ----------------------------------------------- 1.36s 2026-03-28 00:40:23.478919 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.00s 2026-03-28 00:40:23.478938 | orchestrator | Request device events from the kernel 
----------------------------------- 0.78s 2026-03-28 00:40:23.478957 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s 2026-03-28 00:40:23.478975 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2026-03-28 00:40:23.478992 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s 2026-03-28 00:40:23.479010 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2026-03-28 00:40:35.095524 | orchestrator | 2026-03-28 00:40:35 | INFO  | Prepare task for execution of facts. 2026-03-28 00:40:35.191759 | orchestrator | 2026-03-28 00:40:35 | INFO  | Task 3f68df7e-9634-46b5-b8bf-5fcbd80fd541 (facts) was prepared for execution. 2026-03-28 00:40:35.191857 | orchestrator | 2026-03-28 00:40:35 | INFO  | It takes a moment until task 3f68df7e-9634-46b5-b8bf-5fcbd80fd541 (facts) has been started and output is visible here. 2026-03-28 00:40:46.699082 | orchestrator | 2026-03-28 00:40:46.699153 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-28 00:40:46.699162 | orchestrator | 2026-03-28 00:40:46.699185 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 00:40:46.699192 | orchestrator | Saturday 28 March 2026 00:40:38 +0000 (0:00:00.375) 0:00:00.375 ******** 2026-03-28 00:40:46.699198 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:40:46.699205 | orchestrator | ok: [testbed-manager] 2026-03-28 00:40:46.699211 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:40:46.699217 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:40:46.699223 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:40:46.699229 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:40:46.699235 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:40:46.699241 | orchestrator | 2026-03-28 00:40:46.699259 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-03-28 00:40:46.699265 | orchestrator | Saturday 28 March 2026 00:40:40 +0000 (0:00:01.332) 0:00:01.707 ******** 2026-03-28 00:40:46.699271 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:40:46.699278 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:40:46.699284 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:40:46.699290 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:40:46.699297 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:40:46.699302 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:40:46.699309 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:40:46.699315 | orchestrator | 2026-03-28 00:40:46.699321 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 00:40:46.699327 | orchestrator | 2026-03-28 00:40:46.699334 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-28 00:40:46.699340 | orchestrator | Saturday 28 March 2026 00:40:41 +0000 (0:00:01.277) 0:00:02.985 ******** 2026-03-28 00:40:46.699347 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:40:46.699353 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:40:46.699359 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:40:46.699365 | orchestrator | ok: [testbed-manager] 2026-03-28 00:40:46.699371 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:40:46.699377 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:40:46.699383 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:40:46.699389 | orchestrator | 2026-03-28 00:40:46.699396 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 00:40:46.699402 | orchestrator | 2026-03-28 00:40:46.699408 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 00:40:46.699415 | orchestrator | Saturday 28 
March 2026 00:40:45 +0000 (0:00:04.626) 0:00:07.611 ******** 2026-03-28 00:40:46.699421 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:40:46.699427 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:40:46.699433 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:40:46.699439 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:40:46.699445 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:40:46.699451 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:40:46.699457 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:40:46.699463 | orchestrator | 2026-03-28 00:40:46.699469 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:40:46.699476 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:40:46.699483 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:40:46.699489 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:40:46.699495 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:40:46.699501 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:40:46.699513 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:40:46.699519 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:40:46.699525 | orchestrator | 2026-03-28 00:40:46.699531 | orchestrator | 2026-03-28 00:40:46.699538 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:40:46.699544 | orchestrator | Saturday 28 March 2026 00:40:46 +0000 (0:00:00.489) 0:00:08.100 ******** 2026-03-28 
00:40:46.699550 | orchestrator | =============================================================================== 2026-03-28 00:40:46.699556 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.63s 2026-03-28 00:40:46.699562 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.33s 2026-03-28 00:40:46.699568 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-03-28 00:40:46.699574 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2026-03-28 00:40:48.077769 | orchestrator | 2026-03-28 00:40:48 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-28 00:40:48.149598 | orchestrator | 2026-03-28 00:40:48 | INFO  | Task 75f2db06-4772-491f-8e88-b2a8b67195f0 (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-28 00:40:48.149662 | orchestrator | 2026-03-28 00:40:48 | INFO  | It takes a moment until task 75f2db06-4772-491f-8e88-b2a8b67195f0 (ceph-configure-lvm-volumes) has been started and output is visible here. 
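The wipe-partitions play recapped above runs, per data device, a signature wipe (`wipefs`), a zeroing of the first 32M, and then a udev rules reload plus a trigger of kernel device events. A rough dry-run equivalent is sketched below; it prints the commands instead of executing them, since `wipefs`/`dd` against real block devices is destructive, and the `/dev/sdb..sdd` device list is taken from the log, not discovered.

```shell
# Dry-run sketch of the per-device wipe tasks from the play above.
wipe_device() {
  echo "wipefs --all $1"                       # remove fs/RAID/LVM signatures
  echo "dd if=/dev/zero of=$1 bs=1M count=32"  # zero partition tables and labels
}

for dev in /dev/sdb /dev/sdc /dev/sdd; do
  wipe_device "$dev"
done
echo "udevadm control --reload-rules"          # TASK [Reload udev rules]
echo "udevadm trigger"                         # TASK [Request device events from the kernel]
```

Zeroing only the first 32M is enough to clear GPT/MBR headers and LVM metadata so that ceph-ansible later sees the disks as clean, without the cost of a full-device wipe.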
2026-03-28 00:41:00.081807 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-28 00:41:00.081922 | orchestrator | 2.16.14 2026-03-28 00:41:00.081939 | orchestrator | 2026-03-28 00:41:00.081964 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-28 00:41:00.081977 | orchestrator | 2026-03-28 00:41:00.081989 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 00:41:00.082000 | orchestrator | Saturday 28 March 2026 00:40:52 +0000 (0:00:00.299) 0:00:00.300 ******** 2026-03-28 00:41:00.082012 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 00:41:00.082082 | orchestrator | 2026-03-28 00:41:00.082094 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 00:41:00.082105 | orchestrator | Saturday 28 March 2026 00:40:52 +0000 (0:00:00.229) 0:00:00.529 ******** 2026-03-28 00:41:00.082148 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:41:00.082162 | orchestrator | 2026-03-28 00:41:00.082173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082184 | orchestrator | Saturday 28 March 2026 00:40:52 +0000 (0:00:00.216) 0:00:00.745 ******** 2026-03-28 00:41:00.082196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-28 00:41:00.082206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-28 00:41:00.082218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-28 00:41:00.082229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-28 00:41:00.082240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-28 
00:41:00.082251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-28 00:41:00.082262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-28 00:41:00.082272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-28 00:41:00.082283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-28 00:41:00.082294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-28 00:41:00.082332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-28 00:41:00.082345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-28 00:41:00.082359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-28 00:41:00.082372 | orchestrator | 2026-03-28 00:41:00.082385 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082396 | orchestrator | Saturday 28 March 2026 00:40:53 +0000 (0:00:00.372) 0:00:01.118 ******** 2026-03-28 00:41:00.082406 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.082418 | orchestrator | 2026-03-28 00:41:00.082428 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082439 | orchestrator | Saturday 28 March 2026 00:40:53 +0000 (0:00:00.493) 0:00:01.612 ******** 2026-03-28 00:41:00.082450 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.082461 | orchestrator | 2026-03-28 00:41:00.082471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082486 | orchestrator | Saturday 28 March 2026 00:40:53 +0000 (0:00:00.193) 0:00:01.805 ******** 2026-03-28 
00:41:00.082497 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.082508 | orchestrator | 2026-03-28 00:41:00.082519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082529 | orchestrator | Saturday 28 March 2026 00:40:54 +0000 (0:00:00.238) 0:00:02.044 ******** 2026-03-28 00:41:00.082541 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.082551 | orchestrator | 2026-03-28 00:41:00.082562 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082573 | orchestrator | Saturday 28 March 2026 00:40:54 +0000 (0:00:00.196) 0:00:02.240 ******** 2026-03-28 00:41:00.082584 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.082595 | orchestrator | 2026-03-28 00:41:00.082605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082641 | orchestrator | Saturday 28 March 2026 00:40:54 +0000 (0:00:00.188) 0:00:02.428 ******** 2026-03-28 00:41:00.082652 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.082663 | orchestrator | 2026-03-28 00:41:00.082674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082684 | orchestrator | Saturday 28 March 2026 00:40:54 +0000 (0:00:00.193) 0:00:02.622 ******** 2026-03-28 00:41:00.082695 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.082706 | orchestrator | 2026-03-28 00:41:00.082717 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082728 | orchestrator | Saturday 28 March 2026 00:40:54 +0000 (0:00:00.194) 0:00:02.817 ******** 2026-03-28 00:41:00.082739 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.082750 | orchestrator | 2026-03-28 00:41:00.082760 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-28 00:41:00.082771 | orchestrator | Saturday 28 March 2026 00:40:55 +0000 (0:00:00.221) 0:00:03.039 ******** 2026-03-28 00:41:00.082782 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537) 2026-03-28 00:41:00.082794 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537) 2026-03-28 00:41:00.082805 | orchestrator | 2026-03-28 00:41:00.082815 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082844 | orchestrator | Saturday 28 March 2026 00:40:55 +0000 (0:00:00.476) 0:00:03.515 ******** 2026-03-28 00:41:00.082856 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_78ac07d6-a998-431a-8632-f54c89645a8d) 2026-03-28 00:41:00.082867 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_78ac07d6-a998-431a-8632-f54c89645a8d) 2026-03-28 00:41:00.082877 | orchestrator | 2026-03-28 00:41:00.082888 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082908 | orchestrator | Saturday 28 March 2026 00:40:56 +0000 (0:00:00.402) 0:00:03.917 ******** 2026-03-28 00:41:00.082919 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_af575ecf-0cf6-48aa-a1b6-43f16240ccad) 2026-03-28 00:41:00.082930 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_af575ecf-0cf6-48aa-a1b6-43f16240ccad) 2026-03-28 00:41:00.082941 | orchestrator | 2026-03-28 00:41:00.082951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.082962 | orchestrator | Saturday 28 March 2026 00:40:56 +0000 (0:00:00.670) 0:00:04.588 ******** 2026-03-28 00:41:00.082973 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d2c41d1e-c1aa-422a-bc56-ab0bbd118726) 2026-03-28 00:41:00.082984 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d2c41d1e-c1aa-422a-bc56-ab0bbd118726) 2026-03-28 00:41:00.082994 | orchestrator | 2026-03-28 00:41:00.083005 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:00.083016 | orchestrator | Saturday 28 March 2026 00:40:57 +0000 (0:00:00.672) 0:00:05.260 ******** 2026-03-28 00:41:00.083027 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 00:41:00.083037 | orchestrator | 2026-03-28 00:41:00.083048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:00.083059 | orchestrator | Saturday 28 March 2026 00:40:58 +0000 (0:00:00.817) 0:00:06.078 ******** 2026-03-28 00:41:00.083076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-28 00:41:00.083087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-28 00:41:00.083097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-28 00:41:00.083108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-28 00:41:00.083119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-28 00:41:00.083129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-28 00:41:00.083140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-28 00:41:00.083150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-28 00:41:00.083161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-28 00:41:00.083172 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-28 00:41:00.083182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-28 00:41:00.083193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-28 00:41:00.083204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-28 00:41:00.083215 | orchestrator | 2026-03-28 00:41:00.083226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:00.083236 | orchestrator | Saturday 28 March 2026 00:40:58 +0000 (0:00:00.393) 0:00:06.472 ******** 2026-03-28 00:41:00.083247 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.083258 | orchestrator | 2026-03-28 00:41:00.083269 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:00.083280 | orchestrator | Saturday 28 March 2026 00:40:58 +0000 (0:00:00.210) 0:00:06.682 ******** 2026-03-28 00:41:00.083291 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.083301 | orchestrator | 2026-03-28 00:41:00.083312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:00.083323 | orchestrator | Saturday 28 March 2026 00:40:59 +0000 (0:00:00.216) 0:00:06.899 ******** 2026-03-28 00:41:00.083334 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.083351 | orchestrator | 2026-03-28 00:41:00.083362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:00.083373 | orchestrator | Saturday 28 March 2026 00:40:59 +0000 (0:00:00.204) 0:00:07.103 ******** 2026-03-28 00:41:00.083384 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.083395 | orchestrator | 2026-03-28 00:41:00.083405 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-28 00:41:00.083416 | orchestrator | Saturday 28 March 2026 00:40:59 +0000 (0:00:00.218) 0:00:07.321 ******** 2026-03-28 00:41:00.083427 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.083438 | orchestrator | 2026-03-28 00:41:00.083454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:00.083466 | orchestrator | Saturday 28 March 2026 00:40:59 +0000 (0:00:00.200) 0:00:07.522 ******** 2026-03-28 00:41:00.083477 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.083487 | orchestrator | 2026-03-28 00:41:00.083498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:00.083509 | orchestrator | Saturday 28 March 2026 00:40:59 +0000 (0:00:00.242) 0:00:07.764 ******** 2026-03-28 00:41:00.083520 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:00.083531 | orchestrator | 2026-03-28 00:41:00.083547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:08.026977 | orchestrator | Saturday 28 March 2026 00:41:00 +0000 (0:00:00.189) 0:00:07.954 ******** 2026-03-28 00:41:08.027092 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.027109 | orchestrator | 2026-03-28 00:41:08.027121 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:08.027132 | orchestrator | Saturday 28 March 2026 00:41:00 +0000 (0:00:00.228) 0:00:08.183 ******** 2026-03-28 00:41:08.027143 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-28 00:41:08.027155 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-28 00:41:08.027166 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-28 00:41:08.027177 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-28 00:41:08.027188 | orchestrator | 2026-03-28 
00:41:08.027199 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:08.027210 | orchestrator | Saturday 28 March 2026 00:41:01 +0000 (0:00:01.053) 0:00:09.236 ******** 2026-03-28 00:41:08.027221 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.027231 | orchestrator | 2026-03-28 00:41:08.027242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:08.027253 | orchestrator | Saturday 28 March 2026 00:41:01 +0000 (0:00:00.224) 0:00:09.461 ******** 2026-03-28 00:41:08.027264 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.027275 | orchestrator | 2026-03-28 00:41:08.027285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:08.027296 | orchestrator | Saturday 28 March 2026 00:41:01 +0000 (0:00:00.215) 0:00:09.676 ******** 2026-03-28 00:41:08.027307 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.027317 | orchestrator | 2026-03-28 00:41:08.027328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:08.027339 | orchestrator | Saturday 28 March 2026 00:41:02 +0000 (0:00:00.223) 0:00:09.900 ******** 2026-03-28 00:41:08.027349 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.027360 | orchestrator | 2026-03-28 00:41:08.027371 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-28 00:41:08.027382 | orchestrator | Saturday 28 March 2026 00:41:02 +0000 (0:00:00.197) 0:00:10.098 ******** 2026-03-28 00:41:08.027393 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-28 00:41:08.027404 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-28 00:41:08.027414 | orchestrator | 2026-03-28 00:41:08.027425 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-28 00:41:08.027436 | orchestrator | Saturday 28 March 2026 00:41:02 +0000 (0:00:00.167) 0:00:10.265 ******** 2026-03-28 00:41:08.027479 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.027499 | orchestrator | 2026-03-28 00:41:08.027521 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-28 00:41:08.027546 | orchestrator | Saturday 28 March 2026 00:41:02 +0000 (0:00:00.138) 0:00:10.404 ******** 2026-03-28 00:41:08.027562 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.027578 | orchestrator | 2026-03-28 00:41:08.027599 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-28 00:41:08.027647 | orchestrator | Saturday 28 March 2026 00:41:02 +0000 (0:00:00.137) 0:00:10.541 ******** 2026-03-28 00:41:08.027665 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.027682 | orchestrator | 2026-03-28 00:41:08.027700 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-28 00:41:08.027718 | orchestrator | Saturday 28 March 2026 00:41:02 +0000 (0:00:00.134) 0:00:10.676 ******** 2026-03-28 00:41:08.027736 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:41:08.027755 | orchestrator | 2026-03-28 00:41:08.027773 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-28 00:41:08.027791 | orchestrator | Saturday 28 March 2026 00:41:02 +0000 (0:00:00.123) 0:00:10.799 ******** 2026-03-28 00:41:08.027810 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3eb28a65-49e9-527a-93b6-39f945444b2a'}}) 2026-03-28 00:41:08.027824 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c246942-827f-54a7-8a08-735105fd2fd0'}}) 2026-03-28 00:41:08.027835 | orchestrator | 2026-03-28 00:41:08.027846 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-28 00:41:08.027857 | orchestrator | Saturday 28 March 2026 00:41:03 +0000 (0:00:00.171) 0:00:10.971 ******** 2026-03-28 00:41:08.027868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3eb28a65-49e9-527a-93b6-39f945444b2a'}})  2026-03-28 00:41:08.027893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c246942-827f-54a7-8a08-735105fd2fd0'}})  2026-03-28 00:41:08.027905 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.027915 | orchestrator | 2026-03-28 00:41:08.027926 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-28 00:41:08.027936 | orchestrator | Saturday 28 March 2026 00:41:03 +0000 (0:00:00.156) 0:00:11.127 ******** 2026-03-28 00:41:08.027947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3eb28a65-49e9-527a-93b6-39f945444b2a'}})  2026-03-28 00:41:08.027958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c246942-827f-54a7-8a08-735105fd2fd0'}})  2026-03-28 00:41:08.027969 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.027979 | orchestrator | 2026-03-28 00:41:08.027990 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-28 00:41:08.028001 | orchestrator | Saturday 28 March 2026 00:41:03 +0000 (0:00:00.149) 0:00:11.277 ******** 2026-03-28 00:41:08.028011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3eb28a65-49e9-527a-93b6-39f945444b2a'}})  2026-03-28 00:41:08.028042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c246942-827f-54a7-8a08-735105fd2fd0'}})  2026-03-28 00:41:08.028054 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.028065 | 
orchestrator | 2026-03-28 00:41:08.028075 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-28 00:41:08.028086 | orchestrator | Saturday 28 March 2026 00:41:03 +0000 (0:00:00.365) 0:00:11.643 ******** 2026-03-28 00:41:08.028097 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:41:08.028107 | orchestrator | 2026-03-28 00:41:08.028118 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-28 00:41:08.028128 | orchestrator | Saturday 28 March 2026 00:41:03 +0000 (0:00:00.136) 0:00:11.779 ******** 2026-03-28 00:41:08.028139 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:41:08.028163 | orchestrator | 2026-03-28 00:41:08.028174 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-28 00:41:08.028184 | orchestrator | Saturday 28 March 2026 00:41:04 +0000 (0:00:00.129) 0:00:11.908 ******** 2026-03-28 00:41:08.028195 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.028206 | orchestrator | 2026-03-28 00:41:08.028228 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-28 00:41:08.028239 | orchestrator | Saturday 28 March 2026 00:41:04 +0000 (0:00:00.138) 0:00:12.046 ******** 2026-03-28 00:41:08.028250 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.028261 | orchestrator | 2026-03-28 00:41:08.028271 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-28 00:41:08.028282 | orchestrator | Saturday 28 March 2026 00:41:04 +0000 (0:00:00.158) 0:00:12.204 ******** 2026-03-28 00:41:08.028292 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.028303 | orchestrator | 2026-03-28 00:41:08.028314 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-28 00:41:08.028325 | orchestrator | Saturday 28 March 2026 00:41:04 +0000 
(0:00:00.141) 0:00:12.346 ******** 2026-03-28 00:41:08.028335 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 00:41:08.028346 | orchestrator |  "ceph_osd_devices": { 2026-03-28 00:41:08.028357 | orchestrator |  "sdb": { 2026-03-28 00:41:08.028368 | orchestrator |  "osd_lvm_uuid": "3eb28a65-49e9-527a-93b6-39f945444b2a" 2026-03-28 00:41:08.028379 | orchestrator |  }, 2026-03-28 00:41:08.028390 | orchestrator |  "sdc": { 2026-03-28 00:41:08.028400 | orchestrator |  "osd_lvm_uuid": "8c246942-827f-54a7-8a08-735105fd2fd0" 2026-03-28 00:41:08.028411 | orchestrator |  } 2026-03-28 00:41:08.028422 | orchestrator |  } 2026-03-28 00:41:08.028433 | orchestrator | } 2026-03-28 00:41:08.028444 | orchestrator | 2026-03-28 00:41:08.028455 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-28 00:41:08.028465 | orchestrator | Saturday 28 March 2026 00:41:04 +0000 (0:00:00.142) 0:00:12.488 ******** 2026-03-28 00:41:08.028476 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.028487 | orchestrator | 2026-03-28 00:41:08.028497 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-28 00:41:08.028508 | orchestrator | Saturday 28 March 2026 00:41:04 +0000 (0:00:00.147) 0:00:12.636 ******** 2026-03-28 00:41:08.028519 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.028530 | orchestrator | 2026-03-28 00:41:08.028540 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-28 00:41:08.028551 | orchestrator | Saturday 28 March 2026 00:41:04 +0000 (0:00:00.127) 0:00:12.764 ******** 2026-03-28 00:41:08.028562 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:41:08.028572 | orchestrator | 2026-03-28 00:41:08.028583 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-28 00:41:08.028594 | orchestrator | Saturday 28 March 2026 00:41:05 +0000 
(0:00:00.127) 0:00:12.891 ******** 2026-03-28 00:41:08.028631 | orchestrator | changed: [testbed-node-3] => { 2026-03-28 00:41:08.028643 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-28 00:41:08.028654 | orchestrator |  "ceph_osd_devices": { 2026-03-28 00:41:08.028665 | orchestrator |  "sdb": { 2026-03-28 00:41:08.028676 | orchestrator |  "osd_lvm_uuid": "3eb28a65-49e9-527a-93b6-39f945444b2a" 2026-03-28 00:41:08.028686 | orchestrator |  }, 2026-03-28 00:41:08.028697 | orchestrator |  "sdc": { 2026-03-28 00:41:08.028708 | orchestrator |  "osd_lvm_uuid": "8c246942-827f-54a7-8a08-735105fd2fd0" 2026-03-28 00:41:08.028719 | orchestrator |  } 2026-03-28 00:41:08.028729 | orchestrator |  }, 2026-03-28 00:41:08.028740 | orchestrator |  "lvm_volumes": [ 2026-03-28 00:41:08.028751 | orchestrator |  { 2026-03-28 00:41:08.028762 | orchestrator |  "data": "osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a", 2026-03-28 00:41:08.028772 | orchestrator |  "data_vg": "ceph-3eb28a65-49e9-527a-93b6-39f945444b2a" 2026-03-28 00:41:08.028790 | orchestrator |  }, 2026-03-28 00:41:08.028801 | orchestrator |  { 2026-03-28 00:41:08.028812 | orchestrator |  "data": "osd-block-8c246942-827f-54a7-8a08-735105fd2fd0", 2026-03-28 00:41:08.028823 | orchestrator |  "data_vg": "ceph-8c246942-827f-54a7-8a08-735105fd2fd0" 2026-03-28 00:41:08.028833 | orchestrator |  } 2026-03-28 00:41:08.028844 | orchestrator |  ] 2026-03-28 00:41:08.028855 | orchestrator |  } 2026-03-28 00:41:08.028866 | orchestrator | } 2026-03-28 00:41:08.028877 | orchestrator | 2026-03-28 00:41:08.028887 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-28 00:41:08.028898 | orchestrator | Saturday 28 March 2026 00:41:05 +0000 (0:00:00.224) 0:00:13.116 ******** 2026-03-28 00:41:08.028908 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 00:41:08.028919 | orchestrator | 2026-03-28 00:41:08.028930 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-28 00:41:08.028940 | orchestrator | 2026-03-28 00:41:08.028951 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 00:41:08.028962 | orchestrator | Saturday 28 March 2026 00:41:07 +0000 (0:00:02.280) 0:00:15.396 ******** 2026-03-28 00:41:08.028972 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-28 00:41:08.028983 | orchestrator | 2026-03-28 00:41:08.028999 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 00:41:08.029010 | orchestrator | Saturday 28 March 2026 00:41:07 +0000 (0:00:00.266) 0:00:15.662 ******** 2026-03-28 00:41:08.029021 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:41:08.029032 | orchestrator | 2026-03-28 00:41:08.029049 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.273954 | orchestrator | Saturday 28 March 2026 00:41:08 +0000 (0:00:00.237) 0:00:15.899 ******** 2026-03-28 00:41:15.274050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-28 00:41:15.274060 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-28 00:41:15.274067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-28 00:41:15.274073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-28 00:41:15.274079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-28 00:41:15.274085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-28 00:41:15.274091 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-28 00:41:15.274099 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-28 00:41:15.274106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-28 00:41:15.274113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-28 00:41:15.274119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-28 00:41:15.274125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-28 00:41:15.274131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-28 00:41:15.274137 | orchestrator | 2026-03-28 00:41:15.274144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274151 | orchestrator | Saturday 28 March 2026 00:41:08 +0000 (0:00:00.373) 0:00:16.273 ******** 2026-03-28 00:41:15.274157 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274165 | orchestrator | 2026-03-28 00:41:15.274171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274178 | orchestrator | Saturday 28 March 2026 00:41:08 +0000 (0:00:00.216) 0:00:16.490 ******** 2026-03-28 00:41:15.274199 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274206 | orchestrator | 2026-03-28 00:41:15.274212 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274218 | orchestrator | Saturday 28 March 2026 00:41:08 +0000 (0:00:00.211) 0:00:16.701 ******** 2026-03-28 00:41:15.274224 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274230 | orchestrator | 2026-03-28 00:41:15.274237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274243 | 
orchestrator | Saturday 28 March 2026 00:41:09 +0000 (0:00:00.197) 0:00:16.899 ******** 2026-03-28 00:41:15.274250 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274256 | orchestrator | 2026-03-28 00:41:15.274262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274268 | orchestrator | Saturday 28 March 2026 00:41:09 +0000 (0:00:00.215) 0:00:17.115 ******** 2026-03-28 00:41:15.274274 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274280 | orchestrator | 2026-03-28 00:41:15.274287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274293 | orchestrator | Saturday 28 March 2026 00:41:09 +0000 (0:00:00.210) 0:00:17.325 ******** 2026-03-28 00:41:15.274299 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274305 | orchestrator | 2026-03-28 00:41:15.274311 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274318 | orchestrator | Saturday 28 March 2026 00:41:10 +0000 (0:00:00.603) 0:00:17.929 ******** 2026-03-28 00:41:15.274324 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274330 | orchestrator | 2026-03-28 00:41:15.274337 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274343 | orchestrator | Saturday 28 March 2026 00:41:10 +0000 (0:00:00.186) 0:00:18.115 ******** 2026-03-28 00:41:15.274349 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274356 | orchestrator | 2026-03-28 00:41:15.274363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274370 | orchestrator | Saturday 28 March 2026 00:41:10 +0000 (0:00:00.179) 0:00:18.295 ******** 2026-03-28 00:41:15.274377 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40) 2026-03-28 00:41:15.274384 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40) 2026-03-28 00:41:15.274391 | orchestrator | 2026-03-28 00:41:15.274408 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274416 | orchestrator | Saturday 28 March 2026 00:41:10 +0000 (0:00:00.402) 0:00:18.697 ******** 2026-03-28 00:41:15.274423 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0a0aea56-4050-4691-823a-d862fa48a59f) 2026-03-28 00:41:15.274430 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0a0aea56-4050-4691-823a-d862fa48a59f) 2026-03-28 00:41:15.274437 | orchestrator | 2026-03-28 00:41:15.274443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274450 | orchestrator | Saturday 28 March 2026 00:41:11 +0000 (0:00:00.406) 0:00:19.103 ******** 2026-03-28 00:41:15.274457 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c165f4e4-c145-4cd5-8a4b-fe75c460abfb) 2026-03-28 00:41:15.274464 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c165f4e4-c145-4cd5-8a4b-fe75c460abfb) 2026-03-28 00:41:15.274471 | orchestrator | 2026-03-28 00:41:15.274478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:41:15.274497 | orchestrator | Saturday 28 March 2026 00:41:11 +0000 (0:00:00.422) 0:00:19.525 ******** 2026-03-28 00:41:15.274504 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_edfefcfb-f0d2-43d0-b5b0-353b223cd811) 2026-03-28 00:41:15.274511 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_edfefcfb-f0d2-43d0-b5b0-353b223cd811) 2026-03-28 00:41:15.274518 | orchestrator | 2026-03-28 00:41:15.274533 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-28 00:41:15.274541 | orchestrator | Saturday 28 March 2026 00:41:12 +0000 (0:00:00.400) 0:00:19.926 ******** 2026-03-28 00:41:15.274549 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 00:41:15.274557 | orchestrator | 2026-03-28 00:41:15.274565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274573 | orchestrator | Saturday 28 March 2026 00:41:12 +0000 (0:00:00.316) 0:00:20.242 ******** 2026-03-28 00:41:15.274581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-28 00:41:15.274589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-28 00:41:15.274597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-28 00:41:15.274619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-28 00:41:15.274626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-28 00:41:15.274632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-28 00:41:15.274639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-28 00:41:15.274645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-28 00:41:15.274651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-28 00:41:15.274658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-28 00:41:15.274664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-03-28 00:41:15.274670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-28 00:41:15.274676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-28 00:41:15.274682 | orchestrator | 2026-03-28 00:41:15.274688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274695 | orchestrator | Saturday 28 March 2026 00:41:12 +0000 (0:00:00.335) 0:00:20.577 ******** 2026-03-28 00:41:15.274701 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274707 | orchestrator | 2026-03-28 00:41:15.274713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274720 | orchestrator | Saturday 28 March 2026 00:41:12 +0000 (0:00:00.219) 0:00:20.796 ******** 2026-03-28 00:41:15.274726 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274733 | orchestrator | 2026-03-28 00:41:15.274741 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274748 | orchestrator | Saturday 28 March 2026 00:41:13 +0000 (0:00:00.551) 0:00:21.348 ******** 2026-03-28 00:41:15.274754 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274761 | orchestrator | 2026-03-28 00:41:15.274768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274774 | orchestrator | Saturday 28 March 2026 00:41:13 +0000 (0:00:00.192) 0:00:21.541 ******** 2026-03-28 00:41:15.274780 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274787 | orchestrator | 2026-03-28 00:41:15.274793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274799 | orchestrator | Saturday 28 March 2026 00:41:13 +0000 (0:00:00.181) 0:00:21.723 ******** 2026-03-28 00:41:15.274806 
| orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274812 | orchestrator | 2026-03-28 00:41:15.274818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274825 | orchestrator | Saturday 28 March 2026 00:41:14 +0000 (0:00:00.179) 0:00:21.902 ******** 2026-03-28 00:41:15.274831 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274845 | orchestrator | 2026-03-28 00:41:15.274856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274862 | orchestrator | Saturday 28 March 2026 00:41:14 +0000 (0:00:00.175) 0:00:22.078 ******** 2026-03-28 00:41:15.274869 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274875 | orchestrator | 2026-03-28 00:41:15.274882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274889 | orchestrator | Saturday 28 March 2026 00:41:14 +0000 (0:00:00.159) 0:00:22.238 ******** 2026-03-28 00:41:15.274895 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:15.274901 | orchestrator | 2026-03-28 00:41:15.274907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274914 | orchestrator | Saturday 28 March 2026 00:41:14 +0000 (0:00:00.167) 0:00:22.405 ******** 2026-03-28 00:41:15.274920 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-28 00:41:15.274927 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-28 00:41:15.274934 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-28 00:41:15.274940 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-28 00:41:15.274946 | orchestrator | 2026-03-28 00:41:15.274952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:15.274959 | orchestrator | Saturday 28 March 2026 00:41:15 +0000 (0:00:00.633) 
0:00:23.038 ******** 2026-03-28 00:41:15.274965 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.816126 | orchestrator | 2026-03-28 00:41:21.816219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:21.816234 | orchestrator | Saturday 28 March 2026 00:41:15 +0000 (0:00:00.214) 0:00:23.253 ******** 2026-03-28 00:41:21.816262 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.816284 | orchestrator | 2026-03-28 00:41:21.816295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:21.816306 | orchestrator | Saturday 28 March 2026 00:41:15 +0000 (0:00:00.188) 0:00:23.441 ******** 2026-03-28 00:41:21.816317 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.816328 | orchestrator | 2026-03-28 00:41:21.816338 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:41:21.816349 | orchestrator | Saturday 28 March 2026 00:41:15 +0000 (0:00:00.183) 0:00:23.625 ******** 2026-03-28 00:41:21.816360 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.816371 | orchestrator | 2026-03-28 00:41:21.816381 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-28 00:41:21.816392 | orchestrator | Saturday 28 March 2026 00:41:15 +0000 (0:00:00.165) 0:00:23.790 ******** 2026-03-28 00:41:21.816403 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-28 00:41:21.816414 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-28 00:41:21.816425 | orchestrator | 2026-03-28 00:41:21.816435 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-28 00:41:21.816446 | orchestrator | Saturday 28 March 2026 00:41:16 +0000 (0:00:00.301) 0:00:24.091 ******** 2026-03-28 00:41:21.816457 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 00:41:21.816467 | orchestrator | 2026-03-28 00:41:21.816478 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-28 00:41:21.816489 | orchestrator | Saturday 28 March 2026 00:41:16 +0000 (0:00:00.230) 0:00:24.322 ******** 2026-03-28 00:41:21.816499 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.816510 | orchestrator | 2026-03-28 00:41:21.816521 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-28 00:41:21.816531 | orchestrator | Saturday 28 March 2026 00:41:16 +0000 (0:00:00.130) 0:00:24.453 ******** 2026-03-28 00:41:21.816542 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.816553 | orchestrator | 2026-03-28 00:41:21.816563 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-28 00:41:21.816574 | orchestrator | Saturday 28 March 2026 00:41:16 +0000 (0:00:00.119) 0:00:24.572 ******** 2026-03-28 00:41:21.816630 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:41:21.816644 | orchestrator | 2026-03-28 00:41:21.816655 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-28 00:41:21.816666 | orchestrator | Saturday 28 March 2026 00:41:16 +0000 (0:00:00.128) 0:00:24.700 ******** 2026-03-28 00:41:21.816678 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '95774a3e-10f2-5c5c-866d-eaa2f6123896'}}) 2026-03-28 00:41:21.816691 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6126976c-050b-5515-8c81-fb3ee245975b'}}) 2026-03-28 00:41:21.816703 | orchestrator | 2026-03-28 00:41:21.816715 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-28 00:41:21.816728 | orchestrator | Saturday 28 March 2026 00:41:16 +0000 (0:00:00.164) 0:00:24.865 ******** 2026-03-28 00:41:21.816741 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '95774a3e-10f2-5c5c-866d-eaa2f6123896'}})  2026-03-28 00:41:21.816755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6126976c-050b-5515-8c81-fb3ee245975b'}})  2026-03-28 00:41:21.816767 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.816780 | orchestrator | 2026-03-28 00:41:21.816792 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-28 00:41:21.816804 | orchestrator | Saturday 28 March 2026 00:41:17 +0000 (0:00:00.146) 0:00:25.011 ******** 2026-03-28 00:41:21.816816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '95774a3e-10f2-5c5c-866d-eaa2f6123896'}})  2026-03-28 00:41:21.816828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6126976c-050b-5515-8c81-fb3ee245975b'}})  2026-03-28 00:41:21.816840 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.816853 | orchestrator | 2026-03-28 00:41:21.816866 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-28 00:41:21.816878 | orchestrator | Saturday 28 March 2026 00:41:17 +0000 (0:00:00.145) 0:00:25.156 ******** 2026-03-28 00:41:21.816891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '95774a3e-10f2-5c5c-866d-eaa2f6123896'}})  2026-03-28 00:41:21.816903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6126976c-050b-5515-8c81-fb3ee245975b'}})  2026-03-28 00:41:21.816915 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.816926 | orchestrator | 2026-03-28 00:41:21.816954 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-28 00:41:21.816967 | orchestrator | Saturday 28 March 2026 00:41:17 +0000 
(0:00:00.151) 0:00:25.308 ******** 2026-03-28 00:41:21.816979 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:41:21.816991 | orchestrator | 2026-03-28 00:41:21.817003 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-28 00:41:21.817015 | orchestrator | Saturday 28 March 2026 00:41:17 +0000 (0:00:00.144) 0:00:25.452 ******** 2026-03-28 00:41:21.817027 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:41:21.817040 | orchestrator | 2026-03-28 00:41:21.817052 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-28 00:41:21.817063 | orchestrator | Saturday 28 March 2026 00:41:17 +0000 (0:00:00.144) 0:00:25.597 ******** 2026-03-28 00:41:21.817091 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.817103 | orchestrator | 2026-03-28 00:41:21.817114 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-28 00:41:21.817124 | orchestrator | Saturday 28 March 2026 00:41:17 +0000 (0:00:00.127) 0:00:25.724 ******** 2026-03-28 00:41:21.817135 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.817145 | orchestrator | 2026-03-28 00:41:21.817156 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-28 00:41:21.817166 | orchestrator | Saturday 28 March 2026 00:41:18 +0000 (0:00:00.384) 0:00:26.108 ******** 2026-03-28 00:41:21.817177 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:41:21.817197 | orchestrator | 2026-03-28 00:41:21.817208 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-28 00:41:21.817219 | orchestrator | Saturday 28 March 2026 00:41:18 +0000 (0:00:00.142) 0:00:26.251 ******** 2026-03-28 00:41:21.817229 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 00:41:21.817240 | orchestrator |  "ceph_osd_devices": { 2026-03-28 00:41:21.817251 | orchestrator |  "sdb": { 
2026-03-28 00:41:21.817262 | orchestrator |             "osd_lvm_uuid": "95774a3e-10f2-5c5c-866d-eaa2f6123896"
2026-03-28 00:41:21.817272 | orchestrator |         },
2026-03-28 00:41:21.817283 | orchestrator |         "sdc": {
2026-03-28 00:41:21.817293 | orchestrator |             "osd_lvm_uuid": "6126976c-050b-5515-8c81-fb3ee245975b"
2026-03-28 00:41:21.817304 | orchestrator |         }
2026-03-28 00:41:21.817314 | orchestrator |     }
2026-03-28 00:41:21.817325 | orchestrator | }
2026-03-28 00:41:21.817336 | orchestrator |
2026-03-28 00:41:21.817347 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-28 00:41:21.817357 | orchestrator | Saturday 28 March 2026 00:41:18 +0000 (0:00:00.185) 0:00:26.436 ********
2026-03-28 00:41:21.817368 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:41:21.817379 | orchestrator |
2026-03-28 00:41:21.817389 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-28 00:41:21.817400 | orchestrator | Saturday 28 March 2026 00:41:18 +0000 (0:00:00.147) 0:00:26.584 ********
2026-03-28 00:41:21.817410 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:41:21.817421 | orchestrator |
2026-03-28 00:41:21.817431 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-28 00:41:21.817442 | orchestrator | Saturday 28 March 2026 00:41:18 +0000 (0:00:00.131) 0:00:26.715 ********
2026-03-28 00:41:21.817452 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:41:21.817463 | orchestrator |
2026-03-28 00:41:21.817473 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-28 00:41:21.817484 | orchestrator | Saturday 28 March 2026 00:41:18 +0000 (0:00:00.140) 0:00:26.856 ********
2026-03-28 00:41:21.817495 | orchestrator | changed: [testbed-node-4] => {
2026-03-28 00:41:21.817506 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-28 00:41:21.817517 | orchestrator |         "ceph_osd_devices": {
2026-03-28 00:41:21.817527 | orchestrator |             "sdb": {
2026-03-28 00:41:21.817538 | orchestrator |                 "osd_lvm_uuid": "95774a3e-10f2-5c5c-866d-eaa2f6123896"
2026-03-28 00:41:21.817549 | orchestrator |             },
2026-03-28 00:41:21.817560 | orchestrator |             "sdc": {
2026-03-28 00:41:21.817570 | orchestrator |                 "osd_lvm_uuid": "6126976c-050b-5515-8c81-fb3ee245975b"
2026-03-28 00:41:21.817581 | orchestrator |             }
2026-03-28 00:41:21.817592 | orchestrator |         },
2026-03-28 00:41:21.817635 | orchestrator |         "lvm_volumes": [
2026-03-28 00:41:21.817646 | orchestrator |             {
2026-03-28 00:41:21.817657 | orchestrator |                 "data": "osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896",
2026-03-28 00:41:21.817668 | orchestrator |                 "data_vg": "ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896"
2026-03-28 00:41:21.817678 | orchestrator |             },
2026-03-28 00:41:21.817689 | orchestrator |             {
2026-03-28 00:41:21.817700 | orchestrator |                 "data": "osd-block-6126976c-050b-5515-8c81-fb3ee245975b",
2026-03-28 00:41:21.817711 | orchestrator |                 "data_vg": "ceph-6126976c-050b-5515-8c81-fb3ee245975b"
2026-03-28 00:41:21.817721 | orchestrator |             }
2026-03-28 00:41:21.817732 | orchestrator |         ]
2026-03-28 00:41:21.817743 | orchestrator |     }
2026-03-28 00:41:21.817753 | orchestrator | }
2026-03-28 00:41:21.817764 | orchestrator |
2026-03-28 00:41:21.817775 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-28 00:41:21.817785 | orchestrator | Saturday 28 March 2026 00:41:19 +0000 (0:00:00.307) 0:00:27.163 ********
2026-03-28 00:41:21.817796 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-28 00:41:21.817807 | orchestrator |
2026-03-28 00:41:21.817824 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-28 00:41:21.817835 | orchestrator |
2026-03-28 00:41:21.817846 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 00:41:21.817857 | orchestrator | Saturday 28 March 2026 00:41:20 +0000 (0:00:01.333) 0:00:28.497 ********
2026-03-28 00:41:21.817867 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-28 00:41:21.817878 | orchestrator |
2026-03-28 00:41:21.817889 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 00:41:21.817899 | orchestrator | Saturday 28 March 2026 00:41:21 +0000 (0:00:00.409) 0:00:28.907 ********
2026-03-28 00:41:21.817910 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:41:21.817921 | orchestrator |
2026-03-28 00:41:21.817931 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:21.817942 | orchestrator | Saturday 28 March 2026 00:41:21 +0000 (0:00:00.519) 0:00:29.427 ********
2026-03-28 00:41:21.817952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-28 00:41:21.817963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-28 00:41:21.817974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-28 00:41:21.817985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-28 00:41:21.817995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-28 00:41:21.818058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-28 00:41:30.865119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-28 00:41:30.865230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-28 00:41:30.865245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-28 00:41:30.865257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-28 00:41:30.865314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-28 00:41:30.865327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-28 00:41:30.865338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-28 00:41:30.865349 | orchestrator |
2026-03-28 00:41:30.865361 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865373 | orchestrator | Saturday 28 March 2026 00:41:21 +0000 (0:00:00.339) 0:00:29.767 ********
2026-03-28 00:41:30.865384 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.865397 | orchestrator |
2026-03-28 00:41:30.865408 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865419 | orchestrator | Saturday 28 March 2026 00:41:22 +0000 (0:00:00.176) 0:00:29.944 ********
2026-03-28 00:41:30.865430 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.865440 | orchestrator |
2026-03-28 00:41:30.865451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865462 | orchestrator | Saturday 28 March 2026 00:41:22 +0000 (0:00:00.186) 0:00:30.131 ********
2026-03-28 00:41:30.865473 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.865483 | orchestrator |
2026-03-28 00:41:30.865494 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865505 | orchestrator | Saturday 28 March 2026 00:41:22 +0000 (0:00:00.247) 0:00:30.378 ********
2026-03-28 00:41:30.865521 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.865532 | orchestrator |
2026-03-28 00:41:30.865542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865553 | orchestrator | Saturday 28 March 2026 00:41:22 +0000 (0:00:00.266) 0:00:30.644 ********
2026-03-28 00:41:30.865588 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.865642 | orchestrator |
2026-03-28 00:41:30.865653 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865667 | orchestrator | Saturday 28 March 2026 00:41:22 +0000 (0:00:00.214) 0:00:30.859 ********
2026-03-28 00:41:30.865680 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.865692 | orchestrator |
2026-03-28 00:41:30.865704 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865715 | orchestrator | Saturday 28 March 2026 00:41:23 +0000 (0:00:00.242) 0:00:31.102 ********
2026-03-28 00:41:30.865727 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.865740 | orchestrator |
2026-03-28 00:41:30.865752 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865765 | orchestrator | Saturday 28 March 2026 00:41:23 +0000 (0:00:00.163) 0:00:31.265 ********
2026-03-28 00:41:30.865777 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.865789 | orchestrator |
2026-03-28 00:41:30.865800 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865812 | orchestrator | Saturday 28 March 2026 00:41:23 +0000 (0:00:00.163) 0:00:31.429 ********
2026-03-28 00:41:30.865824 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945)
2026-03-28 00:41:30.865838 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945)
2026-03-28 00:41:30.865850 | orchestrator |
2026-03-28 00:41:30.865862 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865874 | orchestrator | Saturday 28 March 2026 00:41:24 +0000 (0:00:00.710) 0:00:32.140 ********
2026-03-28 00:41:30.865886 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869)
2026-03-28 00:41:30.865898 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869)
2026-03-28 00:41:30.865910 | orchestrator |
2026-03-28 00:41:30.865923 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865936 | orchestrator | Saturday 28 March 2026 00:41:25 +0000 (0:00:01.017) 0:00:33.158 ********
2026-03-28 00:41:30.865947 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815)
2026-03-28 00:41:30.865960 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815)
2026-03-28 00:41:30.865972 | orchestrator |
2026-03-28 00:41:30.865983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.865995 | orchestrator | Saturday 28 March 2026 00:41:25 +0000 (0:00:00.454) 0:00:33.613 ********
2026-03-28 00:41:30.866007 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d)
2026-03-28 00:41:30.866187 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d)
2026-03-28 00:41:30.866200 | orchestrator |
2026-03-28 00:41:30.866211 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:41:30.866222 | orchestrator | Saturday 28 March 2026 00:41:26 +0000 (0:00:00.458) 0:00:34.071 ********
2026-03-28 00:41:30.866232 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-28 00:41:30.866243 | orchestrator |
2026-03-28 00:41:30.866254 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866285 | orchestrator | Saturday 28 March 2026 00:41:26 +0000 (0:00:00.445) 0:00:34.517 ********
2026-03-28 00:41:30.866297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-28 00:41:30.866307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-28 00:41:30.866319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-28 00:41:30.866329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-28 00:41:30.866352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-28 00:41:30.866363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-28 00:41:30.866374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-28 00:41:30.866384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-28 00:41:30.866395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-28 00:41:30.866405 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-28 00:41:30.866416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-28 00:41:30.866426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-28 00:41:30.866437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-28 00:41:30.866447 | orchestrator |
2026-03-28 00:41:30.866458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866469 | orchestrator | Saturday 28 March 2026 00:41:27 +0000 (0:00:00.397) 0:00:34.914 ********
2026-03-28 00:41:30.866479 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.866490 | orchestrator |
2026-03-28 00:41:30.866500 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866511 | orchestrator | Saturday 28 March 2026 00:41:27 +0000 (0:00:00.205) 0:00:35.120 ********
2026-03-28 00:41:30.866522 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.866533 | orchestrator |
2026-03-28 00:41:30.866543 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866554 | orchestrator | Saturday 28 March 2026 00:41:27 +0000 (0:00:00.191) 0:00:35.311 ********
2026-03-28 00:41:30.866565 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.866575 | orchestrator |
2026-03-28 00:41:30.866586 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866639 | orchestrator | Saturday 28 March 2026 00:41:27 +0000 (0:00:00.213) 0:00:35.525 ********
2026-03-28 00:41:30.866658 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.866678 | orchestrator |
2026-03-28 00:41:30.866696 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866711 | orchestrator | Saturday 28 March 2026 00:41:27 +0000 (0:00:00.218) 0:00:35.744 ********
2026-03-28 00:41:30.866721 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.866732 | orchestrator |
2026-03-28 00:41:30.866743 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866753 | orchestrator | Saturday 28 March 2026 00:41:28 +0000 (0:00:00.229) 0:00:35.973 ********
2026-03-28 00:41:30.866764 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.866774 | orchestrator |
2026-03-28 00:41:30.866785 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866796 | orchestrator | Saturday 28 March 2026 00:41:28 +0000 (0:00:00.789) 0:00:36.764 ********
2026-03-28 00:41:30.866806 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.866817 | orchestrator |
2026-03-28 00:41:30.866827 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866838 | orchestrator | Saturday 28 March 2026 00:41:29 +0000 (0:00:00.257) 0:00:37.021 ********
2026-03-28 00:41:30.866848 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.866859 | orchestrator |
2026-03-28 00:41:30.866870 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866880 | orchestrator | Saturday 28 March 2026 00:41:29 +0000 (0:00:00.197) 0:00:37.219 ********
2026-03-28 00:41:30.866891 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-28 00:41:30.866911 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-28 00:41:30.866922 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-28 00:41:30.866933 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-28 00:41:30.866944 | orchestrator |
2026-03-28 00:41:30.866954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.866965 | orchestrator | Saturday 28 March 2026 00:41:30 +0000 (0:00:00.675) 0:00:37.894 ********
2026-03-28 00:41:30.866976 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.866986 | orchestrator |
2026-03-28 00:41:30.866997 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.867008 | orchestrator | Saturday 28 March 2026 00:41:30 +0000 (0:00:00.203) 0:00:38.097 ********
2026-03-28 00:41:30.867018 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.867029 | orchestrator |
2026-03-28 00:41:30.867040 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.867050 | orchestrator | Saturday 28 March 2026 00:41:30 +0000 (0:00:00.211) 0:00:38.308 ********
2026-03-28 00:41:30.867061 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.867071 | orchestrator |
2026-03-28 00:41:30.867082 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:41:30.867093 | orchestrator | Saturday 28 March 2026 00:41:30 +0000 (0:00:00.206) 0:00:38.515 ********
2026-03-28 00:41:30.867103 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:30.867114 | orchestrator |
2026-03-28 00:41:30.867132 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-28 00:41:35.404706 | orchestrator | Saturday 28 March 2026 00:41:30 +0000 (0:00:00.227) 0:00:38.742 ********
2026-03-28 00:41:35.404817 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-28 00:41:35.404832 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-28 00:41:35.404842 | orchestrator |
2026-03-28 00:41:35.404853 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-28 00:41:35.404863 | orchestrator | Saturday 28 March 2026 00:41:31 +0000 (0:00:00.199) 0:00:38.942 ********
2026-03-28 00:41:35.404872 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.404882 | orchestrator |
2026-03-28 00:41:35.404893 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-28 00:41:35.404902 | orchestrator | Saturday 28 March 2026 00:41:31 +0000 (0:00:00.142) 0:00:39.085 ********
2026-03-28 00:41:35.404912 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.404921 | orchestrator |
2026-03-28 00:41:35.404931 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-28 00:41:35.404941 | orchestrator | Saturday 28 March 2026 00:41:31 +0000 (0:00:00.120) 0:00:39.205 ********
2026-03-28 00:41:35.404950 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.404959 | orchestrator |
2026-03-28 00:41:35.404970 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-28 00:41:35.404979 | orchestrator | Saturday 28 March 2026 00:41:31 +0000 (0:00:00.138) 0:00:39.344 ********
2026-03-28 00:41:35.404989 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:41:35.404999 | orchestrator |
2026-03-28 00:41:35.405009 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-28 00:41:35.405018 | orchestrator | Saturday 28 March 2026 00:41:31 +0000 (0:00:00.348) 0:00:39.692 ********
2026-03-28 00:41:35.405028 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9825c53-ea63-5cae-a5f7-e494f125bb8e'}})
2026-03-28 00:41:35.405038 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'}})
2026-03-28 00:41:35.405047 | orchestrator |
2026-03-28 00:41:35.405057 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-28 00:41:35.405067 | orchestrator | Saturday 28 March 2026 00:41:32 +0000 (0:00:00.206) 0:00:39.898 ********
2026-03-28 00:41:35.405077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9825c53-ea63-5cae-a5f7-e494f125bb8e'}})
2026-03-28 00:41:35.405113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'}})
2026-03-28 00:41:35.405124 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.405133 | orchestrator |
2026-03-28 00:41:35.405143 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-28 00:41:35.405153 | orchestrator | Saturday 28 March 2026 00:41:32 +0000 (0:00:00.162) 0:00:40.061 ********
2026-03-28 00:41:35.405162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9825c53-ea63-5cae-a5f7-e494f125bb8e'}})
2026-03-28 00:41:35.405172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'}})
2026-03-28 00:41:35.405181 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.405191 | orchestrator |
2026-03-28 00:41:35.405200 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-28 00:41:35.405210 | orchestrator | Saturday 28 March 2026 00:41:32 +0000 (0:00:00.168) 0:00:40.229 ********
2026-03-28 00:41:35.405220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9825c53-ea63-5cae-a5f7-e494f125bb8e'}})
2026-03-28 00:41:35.405231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'}})
2026-03-28 00:41:35.405242 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.405254 | orchestrator |
2026-03-28 00:41:35.405265 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-28 00:41:35.405276 | orchestrator | Saturday 28 March 2026 00:41:32 +0000 (0:00:00.183) 0:00:40.413 ********
2026-03-28 00:41:35.405287 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:41:35.405297 | orchestrator |
2026-03-28 00:41:35.405308 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-28 00:41:35.405319 | orchestrator | Saturday 28 March 2026 00:41:32 +0000 (0:00:00.144) 0:00:40.557 ********
2026-03-28 00:41:35.405330 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:41:35.405340 | orchestrator |
2026-03-28 00:41:35.405351 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-28 00:41:35.405362 | orchestrator | Saturday 28 March 2026 00:41:32 +0000 (0:00:00.175) 0:00:40.733 ********
2026-03-28 00:41:35.405373 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.405384 | orchestrator |
2026-03-28 00:41:35.405394 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-28 00:41:35.405405 | orchestrator | Saturday 28 March 2026 00:41:33 +0000 (0:00:00.153) 0:00:40.887 ********
2026-03-28 00:41:35.405416 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.405427 | orchestrator |
2026-03-28 00:41:35.405438 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-28 00:41:35.405449 | orchestrator | Saturday 28 March 2026 00:41:33 +0000 (0:00:00.173) 0:00:41.061 ********
2026-03-28 00:41:35.405460 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.405470 | orchestrator |
2026-03-28 00:41:35.405481 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-28 00:41:35.405491 | orchestrator | Saturday 28 March 2026 00:41:33 +0000 (0:00:00.152) 0:00:41.214 ********
2026-03-28 00:41:35.405502 | orchestrator | ok: [testbed-node-5] => {
2026-03-28 00:41:35.405513 | orchestrator |     "ceph_osd_devices": {
2026-03-28 00:41:35.405524 | orchestrator |         "sdb": {
2026-03-28 00:41:35.405552 | orchestrator |             "osd_lvm_uuid": "a9825c53-ea63-5cae-a5f7-e494f125bb8e"
2026-03-28 00:41:35.405564 | orchestrator |         },
2026-03-28 00:41:35.405575 | orchestrator |         "sdc": {
2026-03-28 00:41:35.405663 | orchestrator |             "osd_lvm_uuid": "8fa92e37-9e8f-5bc1-86de-5e52e5346f3d"
2026-03-28 00:41:35.405676 | orchestrator |         }
2026-03-28 00:41:35.405686 | orchestrator |     }
2026-03-28 00:41:35.405696 | orchestrator | }
2026-03-28 00:41:35.405706 | orchestrator |
2026-03-28 00:41:35.405725 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-28 00:41:35.405734 | orchestrator | Saturday 28 March 2026 00:41:33 +0000 (0:00:00.145) 0:00:41.359 ********
2026-03-28 00:41:35.405744 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.405753 | orchestrator |
2026-03-28 00:41:35.405763 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-28 00:41:35.405772 | orchestrator | Saturday 28 March 2026 00:41:33 +0000 (0:00:00.120) 0:00:41.480 ********
2026-03-28 00:41:35.405782 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.405791 | orchestrator |
2026-03-28 00:41:35.405801 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-28 00:41:35.405810 | orchestrator | Saturday 28 March 2026 00:41:33 +0000 (0:00:00.354) 0:00:41.834 ********
2026-03-28 00:41:35.405820 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:41:35.405829 | orchestrator |
2026-03-28 00:41:35.405839 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-28 00:41:35.405848 | orchestrator | Saturday 28 March 2026 00:41:34 +0000 (0:00:00.146) 0:00:41.981 ********
2026-03-28 00:41:35.405858 | orchestrator | changed: [testbed-node-5] => {
2026-03-28 00:41:35.405868 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-28 00:41:35.405877 | orchestrator |         "ceph_osd_devices": {
2026-03-28 00:41:35.405887 | orchestrator |             "sdb": {
2026-03-28 00:41:35.405897 | orchestrator |                 "osd_lvm_uuid": "a9825c53-ea63-5cae-a5f7-e494f125bb8e"
2026-03-28 00:41:35.405906 | orchestrator |             },
2026-03-28 00:41:35.405916 | orchestrator |             "sdc": {
2026-03-28 00:41:35.405930 | orchestrator |                 "osd_lvm_uuid": "8fa92e37-9e8f-5bc1-86de-5e52e5346f3d"
2026-03-28 00:41:35.405940 | orchestrator |             }
2026-03-28 00:41:35.405949 | orchestrator |         },
2026-03-28 00:41:35.405959 | orchestrator |         "lvm_volumes": [
2026-03-28 00:41:35.405969 | orchestrator |             {
2026-03-28 00:41:35.405979 | orchestrator |                 "data": "osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e",
2026-03-28 00:41:35.405988 | orchestrator |                 "data_vg": "ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e"
2026-03-28 00:41:35.405998 | orchestrator |             },
2026-03-28 00:41:35.406012 | orchestrator |             {
2026-03-28 00:41:35.406094 | orchestrator |                 "data": "osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d",
2026-03-28 00:41:35.406105 | orchestrator |                 "data_vg": "ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d"
2026-03-28 00:41:35.406114 | orchestrator |             }
2026-03-28 00:41:35.406124 | orchestrator |         ]
2026-03-28 00:41:35.406134 | orchestrator |     }
2026-03-28 00:41:35.406143 | orchestrator | }
2026-03-28 00:41:35.406152 | orchestrator |
2026-03-28 00:41:35.406162 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-28 00:41:35.406171 | orchestrator | Saturday 28 March 2026 00:41:34 +0000 (0:00:00.264) 0:00:42.245 ********
2026-03-28 00:41:35.406181 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-28 00:41:35.406190 | orchestrator |
2026-03-28 00:41:35.406200 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:41:35.406209 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 00:41:35.406220 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 00:41:35.406230 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 00:41:35.406239 | orchestrator |
2026-03-28 00:41:35.406249 | orchestrator |
2026-03-28 00:41:35.406258 | orchestrator |
2026-03-28 00:41:35.406268 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:41:35.406277 | orchestrator | Saturday 28 March 2026 00:41:35 +0000 (0:00:01.009) 0:00:43.255 ********
2026-03-28 00:41:35.406295 | orchestrator | ===============================================================================
2026-03-28 00:41:35.406304 | orchestrator | Write configuration file ------------------------------------------------ 4.62s
2026-03-28 00:41:35.406313 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s
2026-03-28 00:41:35.406323 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s
2026-03-28 00:41:35.406332 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s
2026-03-28 00:41:35.406341 | orchestrator | Add known links to the list of available block devices ------------------ 1.02s
2026-03-28 00:41:35.406351 | orchestrator | Get initial list of available block devices ----------------------------- 0.97s
2026-03-28 00:41:35.406360 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.91s
2026-03-28 00:41:35.406370 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2026-03-28 00:41:35.406379 | orchestrator | Print configuration data ------------------------------------------------ 0.80s
2026-03-28 00:41:35.406388 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s
2026-03-28 00:41:35.406398 | orchestrator | Set WAL devices config data --------------------------------------------- 0.72s
2026-03-28 00:41:35.406407 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2026-03-28 00:41:35.406417 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.70s
2026-03-28 00:41:35.406435 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-03-28 00:41:35.749744 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-03-28 00:41:35.749845 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-03-28 00:41:35.749860 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.67s
2026-03-28 00:41:35.749872 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2026-03-28 00:41:35.749883 | orchestrator | Print DB devices -------------------------------------------------------- 0.61s
2026-03-28 00:41:35.749894 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2026-03-28 00:41:57.806977 | orchestrator | 2026-03-28 00:41:57 | INFO  | Task fdcf1390-7f76-4681-b914-abfee3cfcc62 (sync inventory) is running in background. Output coming soon.
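An aside for readers following the play above: the `lvm_volumes` list in the printed configuration data appears to be derived mechanically from `ceph_osd_devices`, with each `osd_lvm_uuid` yielding an `osd-block-<uuid>` logical volume inside a `ceph-<uuid>` volume group. A minimal sketch of that mapping under that assumption (the function name is hypothetical; the actual logic lives in the Ansible tasks):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Map each OSD device's osd_lvm_uuid to a data LV/VG pair,
    mirroring the lvm_volumes entries seen in the log output."""
    return [
        {
            "data": f"osd-block-{params['osd_lvm_uuid']}",
            "data_vg": f"ceph-{params['osd_lvm_uuid']}",
        }
        for device, params in sorted(ceph_osd_devices.items())
    ]

# Input taken from the "Print configuration data" output for testbed-node-4.
devices = {
    "sdb": {"osd_lvm_uuid": "95774a3e-10f2-5c5c-866d-eaa2f6123896"},
    "sdc": {"osd_lvm_uuid": "6126976c-050b-5515-8c81-fb3ee245975b"},
}
print(build_lvm_volumes(devices))
```

Run against the node-4 input, this reproduces the two `data`/`data_vg` pairs shown in the log.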
2026-03-28 00:42:30.108288 | orchestrator | 2026-03-28 00:41:59 | INFO  | Starting group_vars file reorganization
2026-03-28 00:42:30.108342 | orchestrator | 2026-03-28 00:41:59 | INFO  | Moved 0 file(s) to their respective directories
2026-03-28 00:42:30.108349 | orchestrator | 2026-03-28 00:41:59 | INFO  | Group_vars file reorganization completed
2026-03-28 00:42:30.108353 | orchestrator | 2026-03-28 00:42:02 | INFO  | Starting variable preparation from inventory
2026-03-28 00:42:30.108357 | orchestrator | 2026-03-28 00:42:05 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-28 00:42:30.108361 | orchestrator | 2026-03-28 00:42:05 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-28 00:42:30.108365 | orchestrator | 2026-03-28 00:42:05 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-28 00:42:30.108369 | orchestrator | 2026-03-28 00:42:05 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-28 00:42:30.108373 | orchestrator | 2026-03-28 00:42:05 | INFO  | Variable preparation completed
2026-03-28 00:42:30.108377 | orchestrator | 2026-03-28 00:42:07 | INFO  | Starting inventory overwrite handling
2026-03-28 00:42:30.108381 | orchestrator | 2026-03-28 00:42:07 | INFO  | Handling group overwrites in 99-overwrite
2026-03-28 00:42:30.108385 | orchestrator | 2026-03-28 00:42:07 | INFO  | Removing group frr:children from 60-generic
2026-03-28 00:42:30.108401 | orchestrator | 2026-03-28 00:42:07 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-28 00:42:30.108405 | orchestrator | 2026-03-28 00:42:07 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-28 00:42:30.108409 | orchestrator | 2026-03-28 00:42:07 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-28 00:42:30.108412 | orchestrator | 2026-03-28 00:42:07 | INFO  | Handling group overwrites in 20-roles
2026-03-28 00:42:30.108416 | orchestrator | 2026-03-28 00:42:07 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-28 00:42:30.108420 | orchestrator | 2026-03-28 00:42:07 | INFO  | Removed 5 group(s) in total
2026-03-28 00:42:30.108424 | orchestrator | 2026-03-28 00:42:07 | INFO  | Inventory overwrite handling completed
2026-03-28 00:42:30.108428 | orchestrator | 2026-03-28 00:42:08 | INFO  | Starting merge of inventory files
2026-03-28 00:42:30.108431 | orchestrator | 2026-03-28 00:42:08 | INFO  | Inventory files merged successfully
2026-03-28 00:42:30.108435 | orchestrator | 2026-03-28 00:42:14 | INFO  | Generating minified hosts file
2026-03-28 00:42:30.108439 | orchestrator | 2026-03-28 00:42:15 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-03-28 00:42:30.108443 | orchestrator | 2026-03-28 00:42:15 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-03-28 00:42:30.108456 | orchestrator | 2026-03-28 00:42:17 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-28 00:42:30.108460 | orchestrator | 2026-03-28 00:42:28 | INFO  | Successfully wrote ClusterShell configuration
2026-03-28 00:42:30.108464 | orchestrator | [master 6a343c7] 2026-03-28-00-42
2026-03-28 00:42:30.108469 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-03-28 00:42:30.108473 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-03-28 00:42:30.108477 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-03-28 00:42:30.108481 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-03-28 00:42:31.562949 | orchestrator | 2026-03-28 00:42:31 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-28 00:42:31.653990 | orchestrator | 2026-03-28 00:42:31 | INFO  | Task 0ae0a9fe-6461-4df4-8d82-6ca268473bf3 (ceph-create-lvm-devices) was prepared for execution.
2026-03-28 00:42:31.654166 | orchestrator | 2026-03-28 00:42:31 | INFO  | It takes a moment until task 0ae0a9fe-6461-4df4-8d82-6ca268473bf3 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-28 00:42:45.007439 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 00:42:45.007618 | orchestrator | 2.16.14
2026-03-28 00:42:45.007632 | orchestrator |
2026-03-28 00:42:45.007640 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-28 00:42:45.007648 | orchestrator |
2026-03-28 00:42:45.007655 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 00:42:45.007661 | orchestrator | Saturday 28 March 2026 00:42:36 +0000 (0:00:00.279) 0:00:00.279 ********
2026-03-28 00:42:45.007668 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-28 00:42:45.007786 | orchestrator |
2026-03-28 00:42:45.007796 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 00:42:45.007802 | orchestrator | Saturday 28 March 2026 00:42:36 +0000 (0:00:00.245) 0:00:00.524 ********
2026-03-28 00:42:45.007809 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:45.007815 | orchestrator |
2026-03-28 00:42:45.007822 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.007829 | orchestrator | Saturday 28 March 2026 00:42:37 +0000 (0:00:00.243) 0:00:00.768 ********
2026-03-28 00:42:45.007854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-28 00:42:45.007861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-28 00:42:45.007867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-28 00:42:45.007873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-28 00:42:45.007904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-28 00:42:45.007927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-28 00:42:45.007941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-28 00:42:45.007947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-28 00:42:45.007953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-28 00:42:45.007959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-28 00:42:45.007965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-28 00:42:45.007971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-28 00:42:45.007977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-28 00:42:45.008189 | orchestrator |
2026-03-28 00:42:45.008199 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008206 | orchestrator | Saturday 28 March 2026 00:42:37 +0000 (0:00:00.455) 0:00:01.223 ********
2026-03-28 00:42:45.008214 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.008221 | orchestrator |
2026-03-28 00:42:45.008228 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008236 | orchestrator | Saturday 28 March 2026 00:42:38 +0000 (0:00:00.402) 0:00:01.625 ********
2026-03-28 00:42:45.008243 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.008257 | orchestrator |
2026-03-28 00:42:45.008265 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008272 | orchestrator | Saturday 28 March 2026 00:42:38 +0000 (0:00:00.193) 0:00:01.819 ********
2026-03-28 00:42:45.008279 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.008306 | orchestrator |
2026-03-28 00:42:45.008314 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008321 | orchestrator | Saturday 28 March 2026 00:42:38 +0000 (0:00:00.206) 0:00:02.025 ********
2026-03-28 00:42:45.008328 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.008335 | orchestrator |
2026-03-28 00:42:45.008343 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008350 | orchestrator | Saturday 28 March 2026 00:42:38 +0000 (0:00:00.210) 0:00:02.236 ********
2026-03-28 00:42:45.008358 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.008365 | orchestrator |
2026-03-28 00:42:45.008371 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008377 | orchestrator | Saturday 28 March 2026 00:42:38 +0000 (0:00:00.235) 0:00:02.471 ********
2026-03-28 00:42:45.008402 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.008409 | orchestrator |
2026-03-28 00:42:45.008415 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008421 | orchestrator | Saturday 28 March 2026 00:42:39 +0000 (0:00:00.261) 0:00:02.733 ********
2026-03-28 00:42:45.008427 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.008453 | orchestrator |
2026-03-28 00:42:45.008460 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008466 | orchestrator | Saturday 28 March 2026 00:42:39 +0000 (0:00:00.209) 0:00:02.942 ********
2026-03-28 00:42:45.008473 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.008487 | orchestrator |
2026-03-28 00:42:45.008494 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008500 | orchestrator | Saturday 28 March 2026 00:42:39 +0000 (0:00:00.204) 0:00:03.147 ********
2026-03-28 00:42:45.008506 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537)
2026-03-28 00:42:45.008513 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537)
2026-03-28 00:42:45.008519 | orchestrator |
2026-03-28 00:42:45.008591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008614 | orchestrator | Saturday 28 March 2026 00:42:39 +0000 (0:00:00.451) 0:00:03.598 ********
2026-03-28 00:42:45.008621 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_78ac07d6-a998-431a-8632-f54c89645a8d)
2026-03-28 00:42:45.008627 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_78ac07d6-a998-431a-8632-f54c89645a8d)
2026-03-28 00:42:45.008633 | orchestrator |
2026-03-28 00:42:45.008639 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008646 | orchestrator | Saturday 28 March 2026 00:42:40 +0000 (0:00:00.423) 0:00:04.022 ********
2026-03-28 00:42:45.008652 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_af575ecf-0cf6-48aa-a1b6-43f16240ccad)
2026-03-28 00:42:45.008658 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_af575ecf-0cf6-48aa-a1b6-43f16240ccad)
2026-03-28 00:42:45.008664 | orchestrator |
2026-03-28 00:42:45.008695 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008703 | orchestrator | Saturday 28 March 2026 00:42:41 +0000 (0:00:00.828) 0:00:04.851 ********
2026-03-28 00:42:45.008710 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d2c41d1e-c1aa-422a-bc56-ab0bbd118726)
2026-03-28 00:42:45.008716 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d2c41d1e-c1aa-422a-bc56-ab0bbd118726)
2026-03-28 00:42:45.008722 | orchestrator |
2026-03-28 00:42:45.008728 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:42:45.008734 | orchestrator | Saturday 28 March 2026 00:42:41 +0000 (0:00:00.680) 0:00:05.531 ********
2026-03-28 00:42:45.008740 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-28 00:42:45.008746 | orchestrator |
2026-03-28 00:42:45.008752 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:45.008759 | orchestrator | Saturday 28 March 2026 00:42:42 +0000 (0:00:00.847) 0:00:06.378 ********
2026-03-28 00:42:45.008765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-28 00:42:45.008772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-28 00:42:45.008778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-28 00:42:45.008784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-28 00:42:45.008790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-28 00:42:45.008817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-28 00:42:45.008823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-28 00:42:45.008829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-28 00:42:45.008835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-28 00:42:45.008841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-28 00:42:45.008847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-28 00:42:45.008853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-28 00:42:45.008866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-28 00:42:45.008872 | orchestrator |
2026-03-28 00:42:45.008878 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:45.008884 | orchestrator | Saturday 28 March 2026 00:42:43 +0000 (0:00:00.442) 0:00:06.821 ********
2026-03-28 00:42:45.008890 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.008896 | orchestrator |
2026-03-28 00:42:45.008959 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:45.008966 | orchestrator | Saturday 28 March 2026 00:42:43 +0000 (0:00:00.288) 0:00:07.109 ********
2026-03-28 00:42:45.008973 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.008979 | orchestrator |
2026-03-28 00:42:45.008994 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:45.009000 | orchestrator | Saturday 28 March 2026 00:42:43 +0000 (0:00:00.235) 0:00:07.344 ********
2026-03-28 00:42:45.009007 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.009013 | orchestrator |
2026-03-28 00:42:45.009019 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:45.009048 | orchestrator | Saturday 28 March 2026 00:42:44 +0000 (0:00:00.281) 0:00:07.626 ********
2026-03-28 00:42:45.009089 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.009096 | orchestrator |
2026-03-28 00:42:45.009102 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:45.009108 | orchestrator | Saturday 28 March 2026 00:42:44 +0000 (0:00:00.251) 0:00:07.877 ********
2026-03-28 00:42:45.009115 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.009404 | orchestrator |
2026-03-28 00:42:45.009413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:45.009420 | orchestrator | Saturday 28 March 2026 00:42:44 +0000 (0:00:00.235) 0:00:08.113 ********
2026-03-28 00:42:45.009426 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.009432 | orchestrator |
2026-03-28 00:42:45.009439 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:45.009445 | orchestrator | Saturday 28 March 2026 00:42:44 +0000 (0:00:00.231) 0:00:08.345 ********
2026-03-28 00:42:45.009451 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:45.009457 | orchestrator |
2026-03-28 00:42:45.009471 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:53.242637 | orchestrator | Saturday 28 March 2026 00:42:45 +0000 (0:00:00.273) 0:00:08.618 ********
2026-03-28 00:42:53.242718 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.242726 | orchestrator |
2026-03-28 00:42:53.242733 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:53.242739 | orchestrator | Saturday 28 March 2026 00:42:45 +0000 (0:00:00.217) 0:00:08.836 ********
2026-03-28 00:42:53.242744 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-28 00:42:53.242750 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-28 00:42:53.242756 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-28 00:42:53.242761 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-28 00:42:53.242766 | orchestrator |
2026-03-28 00:42:53.242772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:53.242777 | orchestrator | Saturday 28 March 2026 00:42:46 +0000 (0:00:01.110) 0:00:09.947 ********
2026-03-28 00:42:53.242782 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.242787 | orchestrator |
2026-03-28 00:42:53.242808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:53.242813 | orchestrator | Saturday 28 March 2026 00:42:46 +0000 (0:00:00.200) 0:00:10.148 ********
2026-03-28 00:42:53.242818 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.242823 | orchestrator |
2026-03-28 00:42:53.242829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:53.242851 | orchestrator | Saturday 28 March 2026 00:42:46 +0000 (0:00:00.198) 0:00:10.346 ********
2026-03-28 00:42:53.242857 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.242862 | orchestrator |
2026-03-28 00:42:53.242867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-28 00:42:53.242872 | orchestrator | Saturday 28 March 2026 00:42:46 +0000 (0:00:00.219) 0:00:10.565 ********
2026-03-28 00:42:53.242877 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.242882 | orchestrator |
2026-03-28 00:42:53.242898 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-28 00:42:53.242903 | orchestrator | Saturday 28 March 2026 00:42:47 +0000 (0:00:00.273) 0:00:10.838 ********
2026-03-28 00:42:53.242909 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.242914 | orchestrator |
2026-03-28 00:42:53.242919 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-28 00:42:53.242940 | orchestrator | Saturday 28 March 2026 00:42:47 +0000 (0:00:00.181) 0:00:11.020 ********
2026-03-28 00:42:53.242945 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3eb28a65-49e9-527a-93b6-39f945444b2a'}})
2026-03-28 00:42:53.242951 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c246942-827f-54a7-8a08-735105fd2fd0'}})
2026-03-28 00:42:53.242957 | orchestrator |
2026-03-28 00:42:53.242962 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-28 00:42:53.242967 | orchestrator | Saturday 28 March 2026 00:42:47 +0000 (0:00:00.236) 0:00:11.257 ********
2026-03-28 00:42:53.242974 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:42:53.242984 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:42:53.242993 | orchestrator |
2026-03-28 00:42:53.243001 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-28 00:42:53.243009 | orchestrator | Saturday 28 March 2026 00:42:49 +0000 (0:00:01.920) 0:00:13.178 ********
2026-03-28 00:42:53.243018 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:42:53.243028 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:42:53.243036 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243043 | orchestrator |
2026-03-28 00:42:53.243050 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-28 00:42:53.243059 | orchestrator | Saturday 28 March 2026 00:42:49 +0000 (0:00:00.162) 0:00:13.340 ********
2026-03-28 00:42:53.243089 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:42:53.243098 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:42:53.243105 | orchestrator |
2026-03-28 00:42:53.243114 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-28 00:42:53.243122 | orchestrator | Saturday 28 March 2026 00:42:51 +0000 (0:00:01.426) 0:00:14.767 ********
2026-03-28 00:42:53.243131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:42:53.243140 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:42:53.243148 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243157 | orchestrator |
2026-03-28 00:42:53.243166 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-28 00:42:53.243185 | orchestrator | Saturday 28 March 2026 00:42:51 +0000 (0:00:00.178) 0:00:14.946 ********
2026-03-28 00:42:53.243212 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243219 | orchestrator |
2026-03-28 00:42:53.243225 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-28 00:42:53.243231 | orchestrator | Saturday 28 March 2026 00:42:51 +0000 (0:00:00.133) 0:00:15.079 ********
2026-03-28 00:42:53.243237 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:42:53.243243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:42:53.243249 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243254 | orchestrator |
2026-03-28 00:42:53.243260 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-28 00:42:53.243266 | orchestrator | Saturday 28 March 2026 00:42:51 +0000 (0:00:00.399) 0:00:15.479 ********
2026-03-28 00:42:53.243272 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243277 | orchestrator |
2026-03-28 00:42:53.243283 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-28 00:42:53.243289 | orchestrator | Saturday 28 March 2026 00:42:52 +0000 (0:00:00.155) 0:00:15.634 ********
2026-03-28 00:42:53.243294 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:42:53.243300 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:42:53.243306 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243312 | orchestrator |
2026-03-28 00:42:53.243318 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-28 00:42:53.243324 | orchestrator | Saturday 28 March 2026 00:42:52 +0000 (0:00:00.152) 0:00:15.787 ********
2026-03-28 00:42:53.243330 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243336 | orchestrator |
2026-03-28 00:42:53.243342 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-28 00:42:53.243348 | orchestrator | Saturday 28 March 2026 00:42:52 +0000 (0:00:00.137) 0:00:15.924 ********
2026-03-28 00:42:53.243354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:42:53.243360 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:42:53.243366 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243372 | orchestrator |
2026-03-28 00:42:53.243377 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-28 00:42:53.243383 | orchestrator | Saturday 28 March 2026 00:42:52 +0000 (0:00:00.165) 0:00:16.089 ********
2026-03-28 00:42:53.243389 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:53.243395 | orchestrator |
2026-03-28 00:42:53.243401 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-28 00:42:53.243407 | orchestrator | Saturday 28 March 2026 00:42:52 +0000 (0:00:00.145) 0:00:16.235 ********
2026-03-28 00:42:53.243412 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:42:53.243418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:42:53.243424 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243430 | orchestrator |
2026-03-28 00:42:53.243436 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-28 00:42:53.243446 | orchestrator | Saturday 28 March 2026 00:42:52 +0000 (0:00:00.163) 0:00:16.399 ********
2026-03-28 00:42:53.243452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:42:53.243457 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:42:53.243464 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243469 | orchestrator |
2026-03-28 00:42:53.243487 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-28 00:42:53.243493 | orchestrator | Saturday 28 March 2026 00:42:52 +0000 (0:00:00.157) 0:00:16.557 ********
2026-03-28 00:42:53.243499 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:42:53.243515 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:42:53.243521 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243527 | orchestrator |
2026-03-28 00:42:53.243532 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-28 00:42:53.243537 | orchestrator | Saturday 28 March 2026 00:42:53 +0000 (0:00:00.147) 0:00:16.704 ********
2026-03-28 00:42:53.243548 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:53.243571 | orchestrator |
2026-03-28 00:42:53.243576 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-28 00:42:53.243585 | orchestrator | Saturday 28 March 2026 00:42:53 +0000 (0:00:00.151) 0:00:16.856 ********
2026-03-28 00:42:59.794931 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:59.795026 | orchestrator |
2026-03-28 00:42:59.795036 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-28 00:42:59.795045 | orchestrator | Saturday 28 March 2026 00:42:53 +0000 (0:00:00.170) 0:00:17.026 ********
2026-03-28 00:42:59.795052 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:59.795059 | orchestrator |
2026-03-28 00:42:59.795066 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-28 00:42:59.795073 | orchestrator | Saturday 28 March 2026 00:42:53 +0000 (0:00:00.128) 0:00:17.155 ********
2026-03-28 00:42:59.795080 | orchestrator | ok: [testbed-node-3] => {
2026-03-28 00:42:59.795088 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-28 00:42:59.795095 | orchestrator | }
2026-03-28 00:42:59.795102 | orchestrator |
2026-03-28 00:42:59.795109 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-28 00:42:59.795116 | orchestrator | Saturday 28 March 2026 00:42:54 +0000 (0:00:00.470) 0:00:17.626 ********
2026-03-28 00:42:59.795123 | orchestrator | ok: [testbed-node-3] => {
2026-03-28 00:42:59.795129 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-28 00:42:59.795136 | orchestrator | }
2026-03-28 00:42:59.795143 | orchestrator |
2026-03-28 00:42:59.795149 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-28 00:42:59.795156 | orchestrator | Saturday 28 March 2026 00:42:54 +0000 (0:00:00.149) 0:00:17.775 ********
2026-03-28 00:42:59.795162 | orchestrator | ok: [testbed-node-3] => {
2026-03-28 00:42:59.795169 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-28 00:42:59.795176 | orchestrator | }
2026-03-28 00:42:59.795183 | orchestrator |
2026-03-28 00:42:59.795189 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-28 00:42:59.795196 | orchestrator | Saturday 28 March 2026 00:42:54 +0000 (0:00:00.143) 0:00:17.919 ********
2026-03-28 00:42:59.795203 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:59.795209 | orchestrator |
2026-03-28 00:42:59.795228 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-28 00:42:59.795235 | orchestrator | Saturday 28 March 2026 00:42:54 +0000 (0:00:00.660) 0:00:18.580 ********
2026-03-28 00:42:59.795259 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:59.795267 | orchestrator |
2026-03-28 00:42:59.795273 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-28 00:42:59.795280 | orchestrator | Saturday 28 March 2026 00:42:55 +0000 (0:00:00.515) 0:00:19.095 ********
2026-03-28 00:42:59.795286 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:59.795293 | orchestrator |
2026-03-28 00:42:59.795299 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-28 00:42:59.795306 | orchestrator | Saturday 28 March 2026 00:42:55 +0000 (0:00:00.517) 0:00:19.612 ********
2026-03-28 00:42:59.795312 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:42:59.795319 | orchestrator |
2026-03-28 00:42:59.795325 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-28 00:42:59.795332 | orchestrator | Saturday 28 March 2026 00:42:56 +0000 (0:00:00.151) 0:00:19.764 ********
2026-03-28 00:42:59.795339 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:59.795345 | orchestrator |
2026-03-28 00:42:59.795352 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-28 00:42:59.795358 | orchestrator | Saturday 28 March 2026 00:42:56 +0000 (0:00:00.121) 0:00:19.885 ********
2026-03-28 00:42:59.795365 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:59.795371 | orchestrator |
2026-03-28 00:42:59.795378 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-28 00:42:59.795384 | orchestrator | Saturday 28 March 2026 00:42:56 +0000 (0:00:00.119) 0:00:20.005 ********
2026-03-28 00:42:59.795391 | orchestrator | ok: [testbed-node-3] => {
2026-03-28 00:42:59.795397 | orchestrator |     "vgs_report": {
2026-03-28 00:42:59.795404 | orchestrator |         "vg": []
2026-03-28 00:42:59.795411 | orchestrator |     }
2026-03-28 00:42:59.795417 | orchestrator | }
2026-03-28 00:42:59.795424 | orchestrator |
2026-03-28 00:42:59.795430 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-28 00:42:59.795437 | orchestrator | Saturday 28 March 2026 00:42:56 +0000 (0:00:00.137) 0:00:20.143 ********
2026-03-28 00:42:59.795443 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:59.795450 | orchestrator |
2026-03-28 00:42:59.795456 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-28 00:42:59.795463 | orchestrator | Saturday 28 March 2026 00:42:56 +0000 (0:00:00.145) 0:00:20.288 ********
2026-03-28 00:42:59.795470 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:59.795476 | orchestrator |
2026-03-28 00:42:59.795483 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-28 00:42:59.795491 | orchestrator | Saturday 28 March 2026 00:42:56 +0000 (0:00:00.133) 0:00:20.422 ********
2026-03-28 00:42:59.795498 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:59.795506 | orchestrator |
2026-03-28 00:42:59.795513 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-28 00:42:59.795521 | orchestrator | Saturday 28 March 2026 00:42:56 +0000 (0:00:00.146) 0:00:20.569 ********
2026-03-28 00:42:59.795528 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:42:59.795535 | orchestrator | 2026-03-28 00:42:59.795543 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-28 00:42:59.795569 | orchestrator | Saturday 28 March 2026 00:42:57 +0000 (0:00:00.343) 0:00:20.912 ******** 2026-03-28 00:42:59.795577 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795584 | orchestrator | 2026-03-28 00:42:59.795592 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-28 00:42:59.795599 | orchestrator | Saturday 28 March 2026 00:42:57 +0000 (0:00:00.135) 0:00:21.048 ******** 2026-03-28 00:42:59.795606 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795614 | orchestrator | 2026-03-28 00:42:59.795622 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-28 00:42:59.795629 | orchestrator | Saturday 28 March 2026 00:42:57 +0000 (0:00:00.135) 0:00:21.184 ******** 2026-03-28 00:42:59.795637 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795650 | orchestrator | 2026-03-28 00:42:59.795658 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-28 00:42:59.795665 | orchestrator | Saturday 28 March 2026 00:42:57 +0000 (0:00:00.134) 0:00:21.318 ******** 2026-03-28 00:42:59.795686 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795694 | orchestrator | 2026-03-28 00:42:59.795702 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-28 00:42:59.795709 | orchestrator | Saturday 28 March 2026 00:42:57 +0000 (0:00:00.128) 0:00:21.446 ******** 2026-03-28 00:42:59.795717 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795724 | orchestrator | 2026-03-28 00:42:59.795731 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-28 00:42:59.795739 | orchestrator | 
Saturday 28 March 2026 00:42:57 +0000 (0:00:00.167) 0:00:21.614 ******** 2026-03-28 00:42:59.795746 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795753 | orchestrator | 2026-03-28 00:42:59.795760 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-28 00:42:59.795768 | orchestrator | Saturday 28 March 2026 00:42:58 +0000 (0:00:00.153) 0:00:21.767 ******** 2026-03-28 00:42:59.795775 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795783 | orchestrator | 2026-03-28 00:42:59.795790 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-28 00:42:59.795798 | orchestrator | Saturday 28 March 2026 00:42:58 +0000 (0:00:00.132) 0:00:21.900 ******** 2026-03-28 00:42:59.795805 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795812 | orchestrator | 2026-03-28 00:42:59.795819 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-28 00:42:59.795827 | orchestrator | Saturday 28 March 2026 00:42:58 +0000 (0:00:00.137) 0:00:22.037 ******** 2026-03-28 00:42:59.795835 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795842 | orchestrator | 2026-03-28 00:42:59.795850 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-28 00:42:59.795856 | orchestrator | Saturday 28 March 2026 00:42:58 +0000 (0:00:00.127) 0:00:22.165 ******** 2026-03-28 00:42:59.795862 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795869 | orchestrator | 2026-03-28 00:42:59.795880 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-28 00:42:59.795886 | orchestrator | Saturday 28 March 2026 00:42:58 +0000 (0:00:00.136) 0:00:22.301 ******** 2026-03-28 00:42:59.795894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 
'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:42:59.795902 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:42:59.795909 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795915 | orchestrator | 2026-03-28 00:42:59.795922 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-28 00:42:59.795928 | orchestrator | Saturday 28 March 2026 00:42:58 +0000 (0:00:00.147) 0:00:22.449 ******** 2026-03-28 00:42:59.795935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:42:59.795942 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:42:59.795948 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795954 | orchestrator | 2026-03-28 00:42:59.795961 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-28 00:42:59.795967 | orchestrator | Saturday 28 March 2026 00:42:59 +0000 (0:00:00.352) 0:00:22.801 ******** 2026-03-28 00:42:59.795974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:42:59.795980 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:42:59.795993 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.795999 | orchestrator | 2026-03-28 00:42:59.796006 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-03-28 00:42:59.796012 | orchestrator | Saturday 28 March 2026 00:42:59 +0000 (0:00:00.166) 0:00:22.967 ******** 2026-03-28 00:42:59.796019 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:42:59.796025 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:42:59.796032 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.796038 | orchestrator | 2026-03-28 00:42:59.796044 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-28 00:42:59.796051 | orchestrator | Saturday 28 March 2026 00:42:59 +0000 (0:00:00.178) 0:00:23.145 ******** 2026-03-28 00:42:59.796057 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:42:59.796064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:42:59.796070 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:42:59.796077 | orchestrator | 2026-03-28 00:42:59.796083 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-28 00:42:59.796090 | orchestrator | Saturday 28 March 2026 00:42:59 +0000 (0:00:00.178) 0:00:23.324 ******** 2026-03-28 00:42:59.796100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:43:04.958711 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 
'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:43:04.959392 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:04.959425 | orchestrator | 2026-03-28 00:43:04.959439 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-28 00:43:04.959451 | orchestrator | Saturday 28 March 2026 00:42:59 +0000 (0:00:00.179) 0:00:23.504 ******** 2026-03-28 00:43:04.959463 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:43:04.959475 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:43:04.959486 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:04.959498 | orchestrator | 2026-03-28 00:43:04.959509 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-28 00:43:04.959520 | orchestrator | Saturday 28 March 2026 00:43:00 +0000 (0:00:00.156) 0:00:23.660 ******** 2026-03-28 00:43:04.959530 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:43:04.959541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:43:04.959584 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:04.959596 | orchestrator | 2026-03-28 00:43:04.959607 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-28 00:43:04.959618 | orchestrator | Saturday 28 March 2026 00:43:00 +0000 (0:00:00.168) 0:00:23.829 ******** 2026-03-28 00:43:04.959628 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:43:04.959638 | 
orchestrator | 2026-03-28 00:43:04.959668 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-28 00:43:04.959679 | orchestrator | Saturday 28 March 2026 00:43:00 +0000 (0:00:00.552) 0:00:24.382 ******** 2026-03-28 00:43:04.959690 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:43:04.959700 | orchestrator | 2026-03-28 00:43:04.959710 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-28 00:43:04.959733 | orchestrator | Saturday 28 March 2026 00:43:01 +0000 (0:00:00.541) 0:00:24.923 ******** 2026-03-28 00:43:04.959743 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:43:04.959753 | orchestrator | 2026-03-28 00:43:04.959763 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-28 00:43:04.959774 | orchestrator | Saturday 28 March 2026 00:43:01 +0000 (0:00:00.168) 0:00:25.092 ******** 2026-03-28 00:43:04.959784 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'vg_name': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'}) 2026-03-28 00:43:04.959796 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'vg_name': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'}) 2026-03-28 00:43:04.959806 | orchestrator | 2026-03-28 00:43:04.959817 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-28 00:43:04.959827 | orchestrator | Saturday 28 March 2026 00:43:01 +0000 (0:00:00.174) 0:00:25.267 ******** 2026-03-28 00:43:04.959838 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:43:04.959848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 
'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:43:04.959858 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:04.959868 | orchestrator | 2026-03-28 00:43:04.959878 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-28 00:43:04.959889 | orchestrator | Saturday 28 March 2026 00:43:01 +0000 (0:00:00.195) 0:00:25.462 ******** 2026-03-28 00:43:04.959899 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:43:04.959910 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:43:04.959921 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:04.959931 | orchestrator | 2026-03-28 00:43:04.959941 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-28 00:43:04.959952 | orchestrator | Saturday 28 March 2026 00:43:02 +0000 (0:00:00.342) 0:00:25.804 ******** 2026-03-28 00:43:04.959962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})  2026-03-28 00:43:04.959973 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})  2026-03-28 00:43:04.959983 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:43:04.959993 | orchestrator | 2026-03-28 00:43:04.960003 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-28 00:43:04.960013 | orchestrator | Saturday 28 March 2026 00:43:02 +0000 (0:00:00.125) 0:00:25.930 ******** 2026-03-28 00:43:04.960041 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 
00:43:04.960052 | orchestrator |  "lvm_report": { 2026-03-28 00:43:04.960063 | orchestrator |  "lv": [ 2026-03-28 00:43:04.960073 | orchestrator |  { 2026-03-28 00:43:04.960083 | orchestrator |  "lv_name": "osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a", 2026-03-28 00:43:04.960105 | orchestrator |  "vg_name": "ceph-3eb28a65-49e9-527a-93b6-39f945444b2a" 2026-03-28 00:43:04.960115 | orchestrator |  }, 2026-03-28 00:43:04.960133 | orchestrator |  { 2026-03-28 00:43:04.960143 | orchestrator |  "lv_name": "osd-block-8c246942-827f-54a7-8a08-735105fd2fd0", 2026-03-28 00:43:04.960152 | orchestrator |  "vg_name": "ceph-8c246942-827f-54a7-8a08-735105fd2fd0" 2026-03-28 00:43:04.960163 | orchestrator |  } 2026-03-28 00:43:04.960172 | orchestrator |  ], 2026-03-28 00:43:04.960182 | orchestrator |  "pv": [ 2026-03-28 00:43:04.960191 | orchestrator |  { 2026-03-28 00:43:04.960201 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-28 00:43:04.960211 | orchestrator |  "vg_name": "ceph-3eb28a65-49e9-527a-93b6-39f945444b2a" 2026-03-28 00:43:04.960220 | orchestrator |  }, 2026-03-28 00:43:04.960229 | orchestrator |  { 2026-03-28 00:43:04.960238 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-28 00:43:04.960248 | orchestrator |  "vg_name": "ceph-8c246942-827f-54a7-8a08-735105fd2fd0" 2026-03-28 00:43:04.960257 | orchestrator |  } 2026-03-28 00:43:04.960267 | orchestrator |  ] 2026-03-28 00:43:04.960276 | orchestrator |  } 2026-03-28 00:43:04.960286 | orchestrator | } 2026-03-28 00:43:04.960296 | orchestrator | 2026-03-28 00:43:04.960305 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-28 00:43:04.960314 | orchestrator | 2026-03-28 00:43:04.960324 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-28 00:43:04.960338 | orchestrator | Saturday 28 March 2026 00:43:02 +0000 (0:00:00.254) 0:00:26.184 ******** 2026-03-28 00:43:04.960348 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-03-28 00:43:04.960358 | orchestrator | 2026-03-28 00:43:04.960367 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-28 00:43:04.960377 | orchestrator | Saturday 28 March 2026 00:43:02 +0000 (0:00:00.239) 0:00:26.424 ******** 2026-03-28 00:43:04.960386 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:43:04.960396 | orchestrator | 2026-03-28 00:43:04.960405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:04.960414 | orchestrator | Saturday 28 March 2026 00:43:03 +0000 (0:00:00.221) 0:00:26.645 ******** 2026-03-28 00:43:04.960424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-28 00:43:04.960433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-28 00:43:04.960443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-28 00:43:04.960452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-28 00:43:04.960462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-28 00:43:04.960471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-28 00:43:04.960481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-28 00:43:04.960490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-28 00:43:04.960499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-28 00:43:04.960508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-28 00:43:04.960518 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-28 00:43:04.960528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-28 00:43:04.960538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-28 00:43:04.960570 | orchestrator | 2026-03-28 00:43:04.960581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:04.960591 | orchestrator | Saturday 28 March 2026 00:43:03 +0000 (0:00:00.372) 0:00:27.018 ******** 2026-03-28 00:43:04.960601 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:04.960618 | orchestrator | 2026-03-28 00:43:04.960627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:04.960637 | orchestrator | Saturday 28 March 2026 00:43:03 +0000 (0:00:00.184) 0:00:27.203 ******** 2026-03-28 00:43:04.960646 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:04.960655 | orchestrator | 2026-03-28 00:43:04.960664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:04.960674 | orchestrator | Saturday 28 March 2026 00:43:03 +0000 (0:00:00.173) 0:00:27.376 ******** 2026-03-28 00:43:04.960683 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:04.960692 | orchestrator | 2026-03-28 00:43:04.960701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:04.960711 | orchestrator | Saturday 28 March 2026 00:43:03 +0000 (0:00:00.210) 0:00:27.586 ******** 2026-03-28 00:43:04.960720 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:04.960730 | orchestrator | 2026-03-28 00:43:04.960739 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:04.960748 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 
(0:00:00.607) 0:00:28.194 ******** 2026-03-28 00:43:04.960758 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:04.960767 | orchestrator | 2026-03-28 00:43:04.960776 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:04.960831 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 (0:00:00.188) 0:00:28.383 ******** 2026-03-28 00:43:04.960843 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:04.960853 | orchestrator | 2026-03-28 00:43:04.960872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:15.576754 | orchestrator | Saturday 28 March 2026 00:43:04 +0000 (0:00:00.187) 0:00:28.570 ******** 2026-03-28 00:43:15.576879 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.576895 | orchestrator | 2026-03-28 00:43:15.576906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:15.576917 | orchestrator | Saturday 28 March 2026 00:43:05 +0000 (0:00:00.188) 0:00:28.758 ******** 2026-03-28 00:43:15.576927 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.576936 | orchestrator | 2026-03-28 00:43:15.576946 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:15.576956 | orchestrator | Saturday 28 March 2026 00:43:05 +0000 (0:00:00.201) 0:00:28.960 ******** 2026-03-28 00:43:15.576967 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40) 2026-03-28 00:43:15.576978 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40) 2026-03-28 00:43:15.576988 | orchestrator | 2026-03-28 00:43:15.576997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:15.577007 | orchestrator | Saturday 28 March 2026 00:43:05 +0000 
(0:00:00.389) 0:00:29.350 ******** 2026-03-28 00:43:15.577017 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0a0aea56-4050-4691-823a-d862fa48a59f) 2026-03-28 00:43:15.577027 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0a0aea56-4050-4691-823a-d862fa48a59f) 2026-03-28 00:43:15.577036 | orchestrator | 2026-03-28 00:43:15.577046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:15.577056 | orchestrator | Saturday 28 March 2026 00:43:06 +0000 (0:00:00.438) 0:00:29.788 ******** 2026-03-28 00:43:15.577065 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c165f4e4-c145-4cd5-8a4b-fe75c460abfb) 2026-03-28 00:43:15.577075 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c165f4e4-c145-4cd5-8a4b-fe75c460abfb) 2026-03-28 00:43:15.577085 | orchestrator | 2026-03-28 00:43:15.577094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:15.577104 | orchestrator | Saturday 28 March 2026 00:43:06 +0000 (0:00:00.432) 0:00:30.221 ******** 2026-03-28 00:43:15.577114 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_edfefcfb-f0d2-43d0-b5b0-353b223cd811) 2026-03-28 00:43:15.577148 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_edfefcfb-f0d2-43d0-b5b0-353b223cd811) 2026-03-28 00:43:15.577158 | orchestrator | 2026-03-28 00:43:15.577168 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:15.577177 | orchestrator | Saturday 28 March 2026 00:43:07 +0000 (0:00:00.420) 0:00:30.642 ******** 2026-03-28 00:43:15.577187 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 00:43:15.577196 | orchestrator | 2026-03-28 00:43:15.577206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 
00:43:15.577215 | orchestrator | Saturday 28 March 2026 00:43:07 +0000 (0:00:00.382) 0:00:31.025 ******** 2026-03-28 00:43:15.577224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-28 00:43:15.577234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-28 00:43:15.577244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-28 00:43:15.577253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-28 00:43:15.577264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-28 00:43:15.577276 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-28 00:43:15.577286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-28 00:43:15.577298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-28 00:43:15.577309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-28 00:43:15.577319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-28 00:43:15.577331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-28 00:43:15.577341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-28 00:43:15.577352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-28 00:43:15.577363 | orchestrator | 2026-03-28 00:43:15.577373 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577385 | 
orchestrator | Saturday 28 March 2026 00:43:08 +0000 (0:00:00.647) 0:00:31.673 ******** 2026-03-28 00:43:15.577396 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577407 | orchestrator | 2026-03-28 00:43:15.577418 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577429 | orchestrator | Saturday 28 March 2026 00:43:08 +0000 (0:00:00.209) 0:00:31.883 ******** 2026-03-28 00:43:15.577440 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577451 | orchestrator | 2026-03-28 00:43:15.577461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577473 | orchestrator | Saturday 28 March 2026 00:43:08 +0000 (0:00:00.218) 0:00:32.102 ******** 2026-03-28 00:43:15.577484 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577495 | orchestrator | 2026-03-28 00:43:15.577523 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577535 | orchestrator | Saturday 28 March 2026 00:43:08 +0000 (0:00:00.205) 0:00:32.307 ******** 2026-03-28 00:43:15.577564 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577574 | orchestrator | 2026-03-28 00:43:15.577584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577593 | orchestrator | Saturday 28 March 2026 00:43:08 +0000 (0:00:00.198) 0:00:32.505 ******** 2026-03-28 00:43:15.577603 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577612 | orchestrator | 2026-03-28 00:43:15.577621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577638 | orchestrator | Saturday 28 March 2026 00:43:09 +0000 (0:00:00.199) 0:00:32.705 ******** 2026-03-28 00:43:15.577648 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577657 | orchestrator | 2026-03-28 
00:43:15.577667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577677 | orchestrator | Saturday 28 March 2026 00:43:09 +0000 (0:00:00.200) 0:00:32.905 ******** 2026-03-28 00:43:15.577687 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577696 | orchestrator | 2026-03-28 00:43:15.577705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577715 | orchestrator | Saturday 28 March 2026 00:43:09 +0000 (0:00:00.197) 0:00:33.102 ******** 2026-03-28 00:43:15.577741 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577751 | orchestrator | 2026-03-28 00:43:15.577761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577775 | orchestrator | Saturday 28 March 2026 00:43:09 +0000 (0:00:00.227) 0:00:33.330 ******** 2026-03-28 00:43:15.577785 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-28 00:43:15.577794 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-28 00:43:15.577804 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-28 00:43:15.577814 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-28 00:43:15.577823 | orchestrator | 2026-03-28 00:43:15.577833 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577842 | orchestrator | Saturday 28 March 2026 00:43:10 +0000 (0:00:00.959) 0:00:34.289 ******** 2026-03-28 00:43:15.577852 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577861 | orchestrator | 2026-03-28 00:43:15.577870 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577880 | orchestrator | Saturday 28 March 2026 00:43:10 +0000 (0:00:00.202) 0:00:34.492 ******** 2026-03-28 00:43:15.577889 | orchestrator | skipping: [testbed-node-4] 2026-03-28 
00:43:15.577899 | orchestrator | 2026-03-28 00:43:15.577908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577918 | orchestrator | Saturday 28 March 2026 00:43:11 +0000 (0:00:00.213) 0:00:34.706 ******** 2026-03-28 00:43:15.577928 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577937 | orchestrator | 2026-03-28 00:43:15.577947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:15.577956 | orchestrator | Saturday 28 March 2026 00:43:11 +0000 (0:00:00.690) 0:00:35.396 ******** 2026-03-28 00:43:15.577965 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.577975 | orchestrator | 2026-03-28 00:43:15.577984 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-28 00:43:15.578124 | orchestrator | Saturday 28 March 2026 00:43:11 +0000 (0:00:00.219) 0:00:35.616 ******** 2026-03-28 00:43:15.578136 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:43:15.578146 | orchestrator | 2026-03-28 00:43:15.578156 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-28 00:43:15.578165 | orchestrator | Saturday 28 March 2026 00:43:12 +0000 (0:00:00.154) 0:00:35.770 ******** 2026-03-28 00:43:15.578174 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '95774a3e-10f2-5c5c-866d-eaa2f6123896'}}) 2026-03-28 00:43:15.578184 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6126976c-050b-5515-8c81-fb3ee245975b'}}) 2026-03-28 00:43:15.578194 | orchestrator | 2026-03-28 00:43:15.578203 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-28 00:43:15.578213 | orchestrator | Saturday 28 March 2026 00:43:12 +0000 (0:00:00.199) 0:00:35.970 ******** 2026-03-28 00:43:15.578224 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:15.578235 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:15.578252 | orchestrator |
2026-03-28 00:43:15.578262 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-28 00:43:15.578272 | orchestrator | Saturday 28 March 2026  00:43:14 +0000 (0:00:01.827) 0:00:37.797 ********
2026-03-28 00:43:15.578281 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:15.578292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:15.578301 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:15.578311 | orchestrator |
2026-03-28 00:43:15.578320 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-28 00:43:15.578330 | orchestrator | Saturday 28 March 2026  00:43:14 +0000 (0:00:00.171) 0:00:37.969 ********
2026-03-28 00:43:15.578339 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:15.578359 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:21.344120 | orchestrator |
2026-03-28 00:43:21.344268 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-28 00:43:21.344301 | orchestrator | Saturday 28 March 2026  00:43:15 +0000 (0:00:01.298) 0:00:39.267 ********
2026-03-28 00:43:21.344320 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:21.344340 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:21.344357 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.344376 | orchestrator |
2026-03-28 00:43:21.344394 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-28 00:43:21.344412 | orchestrator | Saturday 28 March 2026  00:43:15 +0000 (0:00:00.158) 0:00:39.426 ********
2026-03-28 00:43:21.344429 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.344449 | orchestrator |
2026-03-28 00:43:21.344468 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-28 00:43:21.344486 | orchestrator | Saturday 28 March 2026  00:43:15 +0000 (0:00:00.151) 0:00:39.577 ********
2026-03-28 00:43:21.344520 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:21.344533 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:21.344602 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.344614 | orchestrator |
2026-03-28 00:43:21.344625 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-28 00:43:21.344636 | orchestrator | Saturday 28 March 2026  00:43:16 +0000 (0:00:00.176) 0:00:39.754 ********
2026-03-28 00:43:21.344647 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.344658 | orchestrator |
2026-03-28 00:43:21.344668 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-28 00:43:21.344679 | orchestrator | Saturday 28 March 2026  00:43:16 +0000 (0:00:00.144) 0:00:39.899 ********
2026-03-28 00:43:21.344690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:21.344701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:21.344740 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.344752 | orchestrator |
2026-03-28 00:43:21.344762 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-28 00:43:21.344773 | orchestrator | Saturday 28 March 2026  00:43:16 +0000 (0:00:00.154) 0:00:40.053 ********
2026-03-28 00:43:21.344783 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.344795 | orchestrator |
2026-03-28 00:43:21.344806 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-28 00:43:21.344817 | orchestrator | Saturday 28 March 2026  00:43:16 +0000 (0:00:00.354) 0:00:40.408 ********
2026-03-28 00:43:21.344827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:21.344838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:21.344849 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.344859 | orchestrator |
2026-03-28 00:43:21.344870 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-28 00:43:21.344880 | orchestrator | Saturday 28 March 2026  00:43:16 +0000 (0:00:00.161) 0:00:40.570 ********
2026-03-28 00:43:21.344891 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:43:21.344903 | orchestrator |
2026-03-28 00:43:21.344913 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-28 00:43:21.344924 | orchestrator | Saturday 28 March 2026  00:43:17 +0000 (0:00:00.135) 0:00:40.706 ********
2026-03-28 00:43:21.344935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:21.344945 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:21.344956 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.344966 | orchestrator |
2026-03-28 00:43:21.344977 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-28 00:43:21.344988 | orchestrator | Saturday 28 March 2026  00:43:17 +0000 (0:00:00.167) 0:00:40.873 ********
2026-03-28 00:43:21.344998 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:21.345009 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:21.345019 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.345030 | orchestrator |
2026-03-28 00:43:21.345040 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-28 00:43:21.345073 | orchestrator | Saturday 28 March 2026  00:43:17 +0000 (0:00:00.159) 0:00:41.033 ********
2026-03-28 00:43:21.345084 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:21.345095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:21.345106 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.345116 | orchestrator |
2026-03-28 00:43:21.345127 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-28 00:43:21.345137 | orchestrator | Saturday 28 March 2026  00:43:17 +0000 (0:00:00.170) 0:00:41.204 ********
2026-03-28 00:43:21.345148 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.345158 | orchestrator |
2026-03-28 00:43:21.345169 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-28 00:43:21.345179 | orchestrator | Saturday 28 March 2026  00:43:17 +0000 (0:00:00.133) 0:00:41.338 ********
2026-03-28 00:43:21.345201 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.345219 | orchestrator |
2026-03-28 00:43:21.345237 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-28 00:43:21.345261 | orchestrator | Saturday 28 March 2026  00:43:17 +0000 (0:00:00.141) 0:00:41.479 ********
2026-03-28 00:43:21.345280 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.345297 | orchestrator |
2026-03-28 00:43:21.345314 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-28 00:43:21.345331 | orchestrator | Saturday 28 March 2026  00:43:18 +0000 (0:00:00.146) 0:00:41.626 ********
2026-03-28 00:43:21.345348 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 00:43:21.345367 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-28 00:43:21.345385 | orchestrator | }
2026-03-28 00:43:21.345404 | orchestrator |
2026-03-28 00:43:21.345415 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-28 00:43:21.345426 | orchestrator | Saturday 28 March 2026  00:43:18 +0000 (0:00:00.147) 0:00:41.773 ********
2026-03-28 00:43:21.345436 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 00:43:21.345447 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-28 00:43:21.345460 | orchestrator | }
2026-03-28 00:43:21.345477 | orchestrator |
2026-03-28 00:43:21.345495 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-28 00:43:21.345513 | orchestrator | Saturday 28 March 2026  00:43:18 +0000 (0:00:00.154) 0:00:41.927 ********
2026-03-28 00:43:21.345531 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 00:43:21.345574 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-28 00:43:21.345594 | orchestrator | }
2026-03-28 00:43:21.345613 | orchestrator |
2026-03-28 00:43:21.345630 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-28 00:43:21.345646 | orchestrator | Saturday 28 March 2026  00:43:18 +0000 (0:00:00.155) 0:00:42.082 ********
2026-03-28 00:43:21.345657 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:43:21.345667 | orchestrator |
2026-03-28 00:43:21.345678 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-28 00:43:21.345688 | orchestrator | Saturday 28 March 2026  00:43:19 +0000 (0:00:00.745) 0:00:42.828 ********
2026-03-28 00:43:21.345699 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:43:21.345709 | orchestrator |
2026-03-28 00:43:21.345720 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-28 00:43:21.345730 | orchestrator | Saturday 28 March 2026  00:43:19 +0000 (0:00:00.525) 0:00:43.353 ********
2026-03-28 00:43:21.345741 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:43:21.345751 | orchestrator |
2026-03-28 00:43:21.345762 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-28 00:43:21.345772 | orchestrator | Saturday 28 March 2026  00:43:20 +0000 (0:00:00.533) 0:00:43.887 ********
2026-03-28 00:43:21.345783 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:43:21.345793 | orchestrator |
2026-03-28 00:43:21.345804 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-28 00:43:21.345814 | orchestrator | Saturday 28 March 2026  00:43:20 +0000 (0:00:00.142) 0:00:44.029 ********
2026-03-28 00:43:21.345825 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.345835 | orchestrator |
2026-03-28 00:43:21.345846 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-28 00:43:21.345856 | orchestrator | Saturday 28 March 2026  00:43:20 +0000 (0:00:00.110) 0:00:44.139 ********
2026-03-28 00:43:21.345867 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.345877 | orchestrator |
2026-03-28 00:43:21.345888 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-28 00:43:21.345899 | orchestrator | Saturday 28 March 2026  00:43:20 +0000 (0:00:00.097) 0:00:44.237 ********
2026-03-28 00:43:21.345909 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 00:43:21.345920 | orchestrator |     "vgs_report": {
2026-03-28 00:43:21.345930 | orchestrator |         "vg": []
2026-03-28 00:43:21.345941 | orchestrator |     }
2026-03-28 00:43:21.345952 | orchestrator | }
2026-03-28 00:43:21.345973 | orchestrator |
2026-03-28 00:43:21.345984 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-28 00:43:21.345994 | orchestrator | Saturday 28 March 2026  00:43:20 +0000 (0:00:00.147) 0:00:44.384 ********
2026-03-28 00:43:21.346005 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.346077 | orchestrator |
2026-03-28 00:43:21.346091 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-28 00:43:21.346102 | orchestrator | Saturday 28 March 2026  00:43:20 +0000 (0:00:00.137) 0:00:44.521 ********
2026-03-28 00:43:21.346113 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.346123 | orchestrator |
2026-03-28 00:43:21.346135 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-28 00:43:21.346145 | orchestrator | Saturday 28 March 2026  00:43:21 +0000 (0:00:00.151) 0:00:44.672 ********
2026-03-28 00:43:21.346156 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.346167 | orchestrator |
2026-03-28 00:43:21.346177 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-28 00:43:21.346188 | orchestrator | Saturday 28 March 2026  00:43:21 +0000 (0:00:00.133) 0:00:44.806 ********
2026-03-28 00:43:21.346199 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:21.346210 | orchestrator |
2026-03-28 00:43:21.346233 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-28 00:43:26.356968 | orchestrator | Saturday 28 March 2026  00:43:21 +0000 (0:00:00.149) 0:00:44.955 ********
2026-03-28 00:43:26.357068 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357083 | orchestrator |
2026-03-28 00:43:26.357095 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-28 00:43:26.357105 | orchestrator | Saturday 28 March 2026  00:43:21 +0000 (0:00:00.140) 0:00:45.096 ********
2026-03-28 00:43:26.357116 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357126 | orchestrator |
2026-03-28 00:43:26.357136 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-28 00:43:26.357146 | orchestrator | Saturday 28 March 2026  00:43:21 +0000 (0:00:00.357) 0:00:45.454 ********
2026-03-28 00:43:26.357156 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357165 | orchestrator |
2026-03-28 00:43:26.357175 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-28 00:43:26.357185 | orchestrator | Saturday 28 March 2026  00:43:21 +0000 (0:00:00.140) 0:00:45.594 ********
2026-03-28 00:43:26.357194 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357204 | orchestrator |
2026-03-28 00:43:26.357213 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-28 00:43:26.357223 | orchestrator | Saturday 28 March 2026  00:43:22 +0000 (0:00:00.134) 0:00:45.729 ********
2026-03-28 00:43:26.357233 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357245 | orchestrator |
2026-03-28 00:43:26.357262 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-28 00:43:26.357279 | orchestrator | Saturday 28 March 2026  00:43:22 +0000 (0:00:00.145) 0:00:45.875 ********
2026-03-28 00:43:26.357295 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357312 | orchestrator |
2026-03-28 00:43:26.357328 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-28 00:43:26.357346 | orchestrator | Saturday 28 March 2026  00:43:22 +0000 (0:00:00.122) 0:00:45.998 ********
2026-03-28 00:43:26.357363 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357382 | orchestrator |
2026-03-28 00:43:26.357430 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-28 00:43:26.357457 | orchestrator | Saturday 28 March 2026  00:43:22 +0000 (0:00:00.138) 0:00:46.136 ********
2026-03-28 00:43:26.357480 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357510 | orchestrator |
2026-03-28 00:43:26.357604 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-28 00:43:26.357632 | orchestrator | Saturday 28 March 2026  00:43:22 +0000 (0:00:00.141) 0:00:46.277 ********
2026-03-28 00:43:26.357664 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357732 | orchestrator |
2026-03-28 00:43:26.357767 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-28 00:43:26.357786 | orchestrator | Saturday 28 March 2026  00:43:22 +0000 (0:00:00.142) 0:00:46.420 ********
2026-03-28 00:43:26.357808 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357834 | orchestrator |
2026-03-28 00:43:26.357854 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-28 00:43:26.357875 | orchestrator | Saturday 28 March 2026  00:43:22 +0000 (0:00:00.143) 0:00:46.563 ********
2026-03-28 00:43:26.357896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.357919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:26.357940 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.357960 | orchestrator |
2026-03-28 00:43:26.357981 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-28 00:43:26.358000 | orchestrator | Saturday 28 March 2026  00:43:23 +0000 (0:00:00.180) 0:00:46.744 ********
2026-03-28 00:43:26.358082 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.358098 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:26.358109 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.358120 | orchestrator |
2026-03-28 00:43:26.358131 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-28 00:43:26.358141 | orchestrator | Saturday 28 March 2026  00:43:23 +0000 (0:00:00.166) 0:00:46.911 ********
2026-03-28 00:43:26.358153 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.358164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:26.358174 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.358185 | orchestrator |
2026-03-28 00:43:26.358196 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-28 00:43:26.358207 | orchestrator | Saturday 28 March 2026  00:43:23 +0000 (0:00:00.165) 0:00:47.076 ********
2026-03-28 00:43:26.358218 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.358229 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:26.358241 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.358251 | orchestrator |
2026-03-28 00:43:26.358287 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-28 00:43:26.358299 | orchestrator | Saturday 28 March 2026  00:43:23 +0000 (0:00:00.409) 0:00:47.485 ********
2026-03-28 00:43:26.358310 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.358321 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:26.358332 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.358342 | orchestrator |
2026-03-28 00:43:26.358353 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-28 00:43:26.358364 | orchestrator | Saturday 28 March 2026  00:43:24 +0000 (0:00:00.179) 0:00:47.665 ********
2026-03-28 00:43:26.358388 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.358407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:26.358419 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.358429 | orchestrator |
2026-03-28 00:43:26.358440 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-28 00:43:26.358451 | orchestrator | Saturday 28 March 2026  00:43:24 +0000 (0:00:00.202) 0:00:47.867 ********
2026-03-28 00:43:26.358461 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.358472 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:26.358483 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.358494 | orchestrator |
2026-03-28 00:43:26.358504 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-28 00:43:26.358515 | orchestrator | Saturday 28 March 2026  00:43:24 +0000 (0:00:00.180) 0:00:48.048 ********
2026-03-28 00:43:26.358526 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.358564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:26.358576 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.358586 | orchestrator |
2026-03-28 00:43:26.358597 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-28 00:43:26.358608 | orchestrator | Saturday 28 March 2026  00:43:24 +0000 (0:00:00.205) 0:00:48.253 ********
2026-03-28 00:43:26.358619 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:43:26.358630 | orchestrator |
2026-03-28 00:43:26.358640 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-28 00:43:26.358652 | orchestrator | Saturday 28 March 2026  00:43:25 +0000 (0:00:00.543) 0:00:48.797 ********
2026-03-28 00:43:26.358662 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:43:26.358673 | orchestrator |
2026-03-28 00:43:26.358684 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-28 00:43:26.358695 | orchestrator | Saturday 28 March 2026  00:43:25 +0000 (0:00:00.511) 0:00:49.308 ********
2026-03-28 00:43:26.358706 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:43:26.358716 | orchestrator |
2026-03-28 00:43:26.358727 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-28 00:43:26.358738 | orchestrator | Saturday 28 March 2026  00:43:25 +0000 (0:00:00.162) 0:00:49.471 ********
2026-03-28 00:43:26.358749 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'vg_name': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:26.358761 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'vg_name': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.358772 | orchestrator |
2026-03-28 00:43:26.358783 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-28 00:43:26.358793 | orchestrator | Saturday 28 March 2026  00:43:26 +0000 (0:00:00.214) 0:00:49.687 ********
2026-03-28 00:43:26.358804 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.358815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:26.358826 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:26.358845 | orchestrator |
2026-03-28 00:43:26.358856 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-28 00:43:26.358867 | orchestrator | Saturday 28 March 2026  00:43:26 +0000 (0:00:00.200) 0:00:49.887 ********
2026-03-28 00:43:26.358878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:26.358896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:32.745161 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:32.745261 | orchestrator |
2026-03-28 00:43:32.745277 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-28 00:43:32.745290 | orchestrator | Saturday 28 March 2026  00:43:26 +0000 (0:00:00.177) 0:00:50.065 ********
2026-03-28 00:43:32.745300 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:43:32.745312 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:43:32.745322 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:43:32.745332 | orchestrator |
2026-03-28 00:43:32.745342 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-28 00:43:32.745351 | orchestrator | Saturday 28 March 2026  00:43:26 +0000 (0:00:00.183) 0:00:50.248 ********
2026-03-28 00:43:32.745361 | orchestrator | ok: [testbed-node-4] => {
2026-03-28 00:43:32.745370 | orchestrator |     "lvm_report": {
2026-03-28 00:43:32.745381 | orchestrator |         "lv": [
2026-03-28 00:43:32.745405 | orchestrator |             {
2026-03-28 00:43:32.745416 | orchestrator |                 "lv_name": "osd-block-6126976c-050b-5515-8c81-fb3ee245975b",
2026-03-28 00:43:32.745426 | orchestrator |                 "vg_name": "ceph-6126976c-050b-5515-8c81-fb3ee245975b"
2026-03-28 00:43:32.745436 | orchestrator |             },
2026-03-28 00:43:32.745445 | orchestrator |             {
2026-03-28 00:43:32.745455 | orchestrator |                 "lv_name": "osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896",
2026-03-28 00:43:32.745464 | orchestrator |                 "vg_name": "ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896"
2026-03-28 00:43:32.745474 | orchestrator |             }
2026-03-28 00:43:32.745483 | orchestrator |         ],
2026-03-28 00:43:32.745493 | orchestrator |         "pv": [
2026-03-28 00:43:32.745502 | orchestrator |             {
2026-03-28 00:43:32.745512 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-28 00:43:32.745521 | orchestrator |                 "vg_name": "ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896"
2026-03-28 00:43:32.745576 | orchestrator |             },
2026-03-28 00:43:32.745590 | orchestrator |             {
2026-03-28 00:43:32.745599 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-28 00:43:32.745609 | orchestrator |                 "vg_name": "ceph-6126976c-050b-5515-8c81-fb3ee245975b"
2026-03-28 00:43:32.745619 | orchestrator |             }
2026-03-28 00:43:32.745629 | orchestrator |         ]
2026-03-28 00:43:32.745639 | orchestrator |     }
2026-03-28 00:43:32.745648 | orchestrator | }
2026-03-28 00:43:32.745658 | orchestrator |
2026-03-28 00:43:32.745668 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-28 00:43:32.745677 | orchestrator |
2026-03-28 00:43:32.745687 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-28 00:43:32.745698 | orchestrator | Saturday 28 March 2026  00:43:27 +0000 (0:00:00.561) 0:00:50.810 ********
2026-03-28 00:43:32.745709 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-28 00:43:32.745720 | orchestrator |
2026-03-28 00:43:32.745731 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-28 00:43:32.745742 | orchestrator | Saturday 28 March 2026  00:43:27 +0000 (0:00:00.293) 0:00:51.104 ********
2026-03-28 00:43:32.745775 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:43:32.745786 | orchestrator |
2026-03-28 00:43:32.745797 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.745808 | orchestrator | Saturday 28 March 2026  00:43:27 +0000 (0:00:00.241) 0:00:51.346 ********
2026-03-28 00:43:32.745819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-28 00:43:32.745829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-28 00:43:32.745839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-28 00:43:32.745853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-28 00:43:32.745865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-28 00:43:32.745875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-28 00:43:32.745886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-28 00:43:32.745897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-28 00:43:32.745908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-28 00:43:32.745918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-28 00:43:32.745929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-28 00:43:32.745940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-28 00:43:32.745950 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-28 00:43:32.745961 | orchestrator |
2026-03-28 00:43:32.745972 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.745983 | orchestrator | Saturday 28 March 2026  00:43:28 +0000 (0:00:00.409) 0:00:51.755 ********
2026-03-28 00:43:32.745994 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:32.746005 | orchestrator |
2026-03-28 00:43:32.746071 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.746082 | orchestrator | Saturday 28 March 2026  00:43:28 +0000 (0:00:00.233) 0:00:51.989 ********
2026-03-28 00:43:32.746092 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:32.746101 | orchestrator |
2026-03-28 00:43:32.746111 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.746170 | orchestrator | Saturday 28 March 2026  00:43:28 +0000 (0:00:00.255) 0:00:52.245 ********
2026-03-28 00:43:32.746182 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:32.746192 | orchestrator |
2026-03-28 00:43:32.746202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.746212 | orchestrator | Saturday 28 March 2026  00:43:28 +0000 (0:00:00.209) 0:00:52.455 ********
2026-03-28 00:43:32.746221 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:32.746230 | orchestrator |
2026-03-28 00:43:32.746240 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.746249 | orchestrator | Saturday 28 March 2026  00:43:29 +0000 (0:00:00.207) 0:00:52.663 ********
2026-03-28 00:43:32.746258 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:32.746268 | orchestrator |
2026-03-28 00:43:32.746277 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.746287 | orchestrator | Saturday 28 March 2026  00:43:29 +0000 (0:00:00.191) 0:00:52.855 ********
2026-03-28 00:43:32.746296 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:32.746306 | orchestrator |
2026-03-28 00:43:32.746315 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.746325 | orchestrator | Saturday 28 March 2026  00:43:29 +0000 (0:00:00.650) 0:00:53.505 ********
2026-03-28 00:43:32.746348 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:32.746378 | orchestrator |
2026-03-28 00:43:32.746388 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.746398 | orchestrator | Saturday 28 March 2026  00:43:30 +0000 (0:00:00.227) 0:00:53.732 ********
2026-03-28 00:43:32.746408 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:43:32.746417 | orchestrator |
2026-03-28 00:43:32.746427 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.746437 | orchestrator | Saturday 28 March 2026  00:43:30 +0000 (0:00:00.190) 0:00:53.923 ********
2026-03-28 00:43:32.746447 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945)
2026-03-28 00:43:32.746458 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945)
2026-03-28 00:43:32.746468 | orchestrator |
2026-03-28 00:43:32.746477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.746487 | orchestrator | Saturday 28 March 2026  00:43:30 +0000 (0:00:00.425) 0:00:54.348 ********
2026-03-28 00:43:32.746497 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869)
2026-03-28 00:43:32.746507 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869)
2026-03-28 00:43:32.746516 | orchestrator |
2026-03-28 00:43:32.746526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-28 00:43:32.746592 | orchestrator | Saturday 28 March 2026  00:43:31 +0000 (0:00:00.441) 0:00:54.790 ********
2026-03-28 00:43:32.746602 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815)
2026-03-28 00:43:32.746612 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815)
2026-03-28 00:43:32.746621 | orchestrator |
2026-03-28 00:43:32.746631 | orchestrator | TASK [Add
known links to the list of available block devices] ****************** 2026-03-28 00:43:32.746640 | orchestrator | Saturday 28 March 2026 00:43:31 +0000 (0:00:00.439) 0:00:55.230 ******** 2026-03-28 00:43:32.746650 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d) 2026-03-28 00:43:32.746660 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d) 2026-03-28 00:43:32.746669 | orchestrator | 2026-03-28 00:43:32.746679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-28 00:43:32.746688 | orchestrator | Saturday 28 March 2026 00:43:32 +0000 (0:00:00.444) 0:00:55.675 ******** 2026-03-28 00:43:32.746698 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-28 00:43:32.746708 | orchestrator | 2026-03-28 00:43:32.746717 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:32.746727 | orchestrator | Saturday 28 March 2026 00:43:32 +0000 (0:00:00.346) 0:00:56.021 ******** 2026-03-28 00:43:32.746736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-28 00:43:32.746745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-28 00:43:32.746755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-28 00:43:32.746764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-28 00:43:32.746774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-28 00:43:32.746783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-28 00:43:32.746831 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-28 00:43:32.746842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-28 00:43:32.746851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-28 00:43:32.746868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-28 00:43:32.746878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-28 00:43:32.746897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-28 00:43:41.529084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-28 00:43:41.529147 | orchestrator | 2026-03-28 00:43:41.529156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529162 | orchestrator | Saturday 28 March 2026 00:43:32 +0000 (0:00:00.422) 0:00:56.443 ******** 2026-03-28 00:43:41.529168 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529175 | orchestrator | 2026-03-28 00:43:41.529181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529187 | orchestrator | Saturday 28 March 2026 00:43:33 +0000 (0:00:00.239) 0:00:56.683 ******** 2026-03-28 00:43:41.529193 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529198 | orchestrator | 2026-03-28 00:43:41.529204 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529210 | orchestrator | Saturday 28 March 2026 00:43:33 +0000 (0:00:00.200) 0:00:56.883 ******** 2026-03-28 00:43:41.529215 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529221 | orchestrator | 2026-03-28 00:43:41.529227 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529241 | orchestrator | Saturday 28 March 2026 00:43:33 +0000 (0:00:00.684) 0:00:57.567 ******** 2026-03-28 00:43:41.529247 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529253 | orchestrator | 2026-03-28 00:43:41.529258 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529264 | orchestrator | Saturday 28 March 2026 00:43:34 +0000 (0:00:00.209) 0:00:57.777 ******** 2026-03-28 00:43:41.529270 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529275 | orchestrator | 2026-03-28 00:43:41.529281 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529287 | orchestrator | Saturday 28 March 2026 00:43:34 +0000 (0:00:00.204) 0:00:57.981 ******** 2026-03-28 00:43:41.529292 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529298 | orchestrator | 2026-03-28 00:43:41.529304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529309 | orchestrator | Saturday 28 March 2026 00:43:34 +0000 (0:00:00.187) 0:00:58.168 ******** 2026-03-28 00:43:41.529315 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529321 | orchestrator | 2026-03-28 00:43:41.529326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529332 | orchestrator | Saturday 28 March 2026 00:43:34 +0000 (0:00:00.209) 0:00:58.378 ******** 2026-03-28 00:43:41.529337 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529343 | orchestrator | 2026-03-28 00:43:41.529349 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529355 | orchestrator | Saturday 28 March 2026 00:43:34 +0000 (0:00:00.193) 0:00:58.572 ******** 
2026-03-28 00:43:41.529361 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-28 00:43:41.529367 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-28 00:43:41.529373 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-28 00:43:41.529378 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-28 00:43:41.529384 | orchestrator | 2026-03-28 00:43:41.529390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529395 | orchestrator | Saturday 28 March 2026 00:43:35 +0000 (0:00:00.692) 0:00:59.264 ******** 2026-03-28 00:43:41.529401 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529407 | orchestrator | 2026-03-28 00:43:41.529412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529431 | orchestrator | Saturday 28 March 2026 00:43:35 +0000 (0:00:00.193) 0:00:59.457 ******** 2026-03-28 00:43:41.529437 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529443 | orchestrator | 2026-03-28 00:43:41.529449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529454 | orchestrator | Saturday 28 March 2026 00:43:36 +0000 (0:00:00.220) 0:00:59.678 ******** 2026-03-28 00:43:41.529460 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529465 | orchestrator | 2026-03-28 00:43:41.529471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-28 00:43:41.529477 | orchestrator | Saturday 28 March 2026 00:43:36 +0000 (0:00:00.199) 0:00:59.877 ******** 2026-03-28 00:43:41.529482 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529488 | orchestrator | 2026-03-28 00:43:41.529493 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-28 00:43:41.529499 | orchestrator | Saturday 28 March 2026 00:43:36 
+0000 (0:00:00.247) 0:01:00.125 ******** 2026-03-28 00:43:41.529504 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529510 | orchestrator | 2026-03-28 00:43:41.529516 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-28 00:43:41.529521 | orchestrator | Saturday 28 March 2026 00:43:36 +0000 (0:00:00.135) 0:01:00.260 ******** 2026-03-28 00:43:41.529564 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9825c53-ea63-5cae-a5f7-e494f125bb8e'}}) 2026-03-28 00:43:41.529573 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'}}) 2026-03-28 00:43:41.529579 | orchestrator | 2026-03-28 00:43:41.529584 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-28 00:43:41.529590 | orchestrator | Saturday 28 March 2026 00:43:37 +0000 (0:00:00.401) 0:01:00.662 ******** 2026-03-28 00:43:41.529597 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'}) 2026-03-28 00:43:41.529603 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'}) 2026-03-28 00:43:41.529609 | orchestrator | 2026-03-28 00:43:41.529614 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-28 00:43:41.529630 | orchestrator | Saturday 28 March 2026 00:43:38 +0000 (0:00:01.813) 0:01:02.476 ******** 2026-03-28 00:43:41.529636 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:41.529643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:41.529648 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529654 | orchestrator | 2026-03-28 00:43:41.529660 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-28 00:43:41.529666 | orchestrator | Saturday 28 March 2026 00:43:39 +0000 (0:00:00.159) 0:01:02.636 ******** 2026-03-28 00:43:41.529673 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'}) 2026-03-28 00:43:41.529683 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'}) 2026-03-28 00:43:41.529690 | orchestrator | 2026-03-28 00:43:41.529697 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-28 00:43:41.529703 | orchestrator | Saturday 28 March 2026 00:43:40 +0000 (0:00:01.307) 0:01:03.944 ******** 2026-03-28 00:43:41.529710 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:41.529720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:41.529727 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529734 | orchestrator | 2026-03-28 00:43:41.529740 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-28 00:43:41.529746 | orchestrator | Saturday 28 March 2026 00:43:40 +0000 (0:00:00.164) 0:01:04.109 ******** 2026-03-28 00:43:41.529752 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529759 | 
orchestrator | 2026-03-28 00:43:41.529766 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-28 00:43:41.529772 | orchestrator | Saturday 28 March 2026 00:43:40 +0000 (0:00:00.149) 0:01:04.259 ******** 2026-03-28 00:43:41.529779 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:41.529785 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:41.529792 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529798 | orchestrator | 2026-03-28 00:43:41.529804 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-28 00:43:41.529811 | orchestrator | Saturday 28 March 2026 00:43:40 +0000 (0:00:00.153) 0:01:04.412 ******** 2026-03-28 00:43:41.529817 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529823 | orchestrator | 2026-03-28 00:43:41.529830 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-28 00:43:41.529836 | orchestrator | Saturday 28 March 2026 00:43:40 +0000 (0:00:00.135) 0:01:04.548 ******** 2026-03-28 00:43:41.529842 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:41.529849 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:41.529855 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529862 | orchestrator | 2026-03-28 00:43:41.529868 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-03-28 00:43:41.529875 | orchestrator | Saturday 28 March 2026 00:43:41 +0000 (0:00:00.136) 0:01:04.685 ******** 2026-03-28 00:43:41.529881 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529888 | orchestrator | 2026-03-28 00:43:41.529895 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-28 00:43:41.529901 | orchestrator | Saturday 28 March 2026 00:43:41 +0000 (0:00:00.131) 0:01:04.816 ******** 2026-03-28 00:43:41.529908 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:41.529914 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:41.529921 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:41.529927 | orchestrator | 2026-03-28 00:43:41.529934 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-28 00:43:41.529941 | orchestrator | Saturday 28 March 2026 00:43:41 +0000 (0:00:00.135) 0:01:04.952 ******** 2026-03-28 00:43:41.529947 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:43:41.529954 | orchestrator | 2026-03-28 00:43:41.529961 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-28 00:43:41.529967 | orchestrator | Saturday 28 March 2026 00:43:41 +0000 (0:00:00.125) 0:01:05.078 ******** 2026-03-28 00:43:41.529978 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:47.553860 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:47.553959 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.553973 | orchestrator | 2026-03-28 00:43:47.553982 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-28 00:43:47.553992 | orchestrator | Saturday 28 March 2026 00:43:41 +0000 (0:00:00.305) 0:01:05.384 ******** 2026-03-28 00:43:47.554000 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:47.554009 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:47.554062 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554072 | orchestrator | 2026-03-28 00:43:47.554095 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-28 00:43:47.554104 | orchestrator | Saturday 28 March 2026 00:43:41 +0000 (0:00:00.143) 0:01:05.527 ******** 2026-03-28 00:43:47.554112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:47.554120 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:47.554128 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554136 | orchestrator | 2026-03-28 00:43:47.554144 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-28 00:43:47.554152 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.131) 0:01:05.659 ******** 2026-03-28 00:43:47.554170 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554178 | orchestrator | 2026-03-28 00:43:47.554186 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-28 00:43:47.554194 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.130) 0:01:05.789 ******** 2026-03-28 00:43:47.554202 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554210 | orchestrator | 2026-03-28 00:43:47.554218 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-28 00:43:47.554226 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.137) 0:01:05.927 ******** 2026-03-28 00:43:47.554234 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554242 | orchestrator | 2026-03-28 00:43:47.554250 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-28 00:43:47.554258 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.123) 0:01:06.051 ******** 2026-03-28 00:43:47.554266 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:43:47.554275 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-28 00:43:47.554283 | orchestrator | } 2026-03-28 00:43:47.554291 | orchestrator | 2026-03-28 00:43:47.554299 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-28 00:43:47.554307 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.132) 0:01:06.183 ******** 2026-03-28 00:43:47.554315 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:43:47.554323 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-28 00:43:47.554331 | orchestrator | } 2026-03-28 00:43:47.554339 | orchestrator | 2026-03-28 00:43:47.554347 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-28 00:43:47.554355 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.120) 0:01:06.303 ******** 2026-03-28 00:43:47.554363 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:43:47.554371 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-03-28 00:43:47.554379 | orchestrator | } 2026-03-28 00:43:47.554387 | orchestrator | 2026-03-28 00:43:47.554395 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-28 00:43:47.554403 | orchestrator | Saturday 28 March 2026 00:43:42 +0000 (0:00:00.133) 0:01:06.437 ******** 2026-03-28 00:43:47.554432 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:43:47.554442 | orchestrator | 2026-03-28 00:43:47.554451 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-28 00:43:47.554460 | orchestrator | Saturday 28 March 2026 00:43:43 +0000 (0:00:00.491) 0:01:06.928 ******** 2026-03-28 00:43:47.554469 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:43:47.554478 | orchestrator | 2026-03-28 00:43:47.554487 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-28 00:43:47.554496 | orchestrator | Saturday 28 March 2026 00:43:43 +0000 (0:00:00.503) 0:01:07.431 ******** 2026-03-28 00:43:47.554505 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:43:47.554514 | orchestrator | 2026-03-28 00:43:47.554541 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-28 00:43:47.554551 | orchestrator | Saturday 28 March 2026 00:43:44 +0000 (0:00:00.491) 0:01:07.922 ******** 2026-03-28 00:43:47.554560 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:43:47.554569 | orchestrator | 2026-03-28 00:43:47.554578 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-28 00:43:47.554587 | orchestrator | Saturday 28 March 2026 00:43:44 +0000 (0:00:00.291) 0:01:08.214 ******** 2026-03-28 00:43:47.554596 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554606 | orchestrator | 2026-03-28 00:43:47.554615 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-03-28 00:43:47.554623 | orchestrator | Saturday 28 March 2026 00:43:44 +0000 (0:00:00.111) 0:01:08.325 ******** 2026-03-28 00:43:47.554632 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554641 | orchestrator | 2026-03-28 00:43:47.554651 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-28 00:43:47.554660 | orchestrator | Saturday 28 March 2026 00:43:44 +0000 (0:00:00.114) 0:01:08.439 ******** 2026-03-28 00:43:47.554669 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:43:47.554678 | orchestrator |  "vgs_report": { 2026-03-28 00:43:47.554687 | orchestrator |  "vg": [] 2026-03-28 00:43:47.554710 | orchestrator |  } 2026-03-28 00:43:47.554720 | orchestrator | } 2026-03-28 00:43:47.554728 | orchestrator | 2026-03-28 00:43:47.554737 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-28 00:43:47.554746 | orchestrator | Saturday 28 March 2026 00:43:44 +0000 (0:00:00.135) 0:01:08.575 ******** 2026-03-28 00:43:47.554755 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554764 | orchestrator | 2026-03-28 00:43:47.554774 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-28 00:43:47.554782 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.134) 0:01:08.709 ******** 2026-03-28 00:43:47.554790 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554798 | orchestrator | 2026-03-28 00:43:47.554806 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-28 00:43:47.554814 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.115) 0:01:08.825 ******** 2026-03-28 00:43:47.554821 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554829 | orchestrator | 2026-03-28 00:43:47.554837 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-03-28 00:43:47.554845 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.126) 0:01:08.952 ******** 2026-03-28 00:43:47.554853 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554861 | orchestrator | 2026-03-28 00:43:47.554869 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-28 00:43:47.554876 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.156) 0:01:09.108 ******** 2026-03-28 00:43:47.554884 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554892 | orchestrator | 2026-03-28 00:43:47.554900 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-28 00:43:47.554908 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.128) 0:01:09.237 ******** 2026-03-28 00:43:47.554915 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554930 | orchestrator | 2026-03-28 00:43:47.554938 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-28 00:43:47.554946 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.128) 0:01:09.366 ******** 2026-03-28 00:43:47.554953 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554961 | orchestrator | 2026-03-28 00:43:47.554969 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-28 00:43:47.554977 | orchestrator | Saturday 28 March 2026 00:43:45 +0000 (0:00:00.144) 0:01:09.510 ******** 2026-03-28 00:43:47.554985 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.554993 | orchestrator | 2026-03-28 00:43:47.555000 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-28 00:43:47.555008 | orchestrator | Saturday 28 March 2026 00:43:46 +0000 (0:00:00.131) 0:01:09.642 ******** 2026-03-28 00:43:47.555016 | orchestrator | skipping: 
[testbed-node-5] 2026-03-28 00:43:47.555024 | orchestrator | 2026-03-28 00:43:47.555032 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-28 00:43:47.555040 | orchestrator | Saturday 28 March 2026 00:43:46 +0000 (0:00:00.369) 0:01:10.011 ******** 2026-03-28 00:43:47.555048 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.555056 | orchestrator | 2026-03-28 00:43:47.555064 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-28 00:43:47.555071 | orchestrator | Saturday 28 March 2026 00:43:46 +0000 (0:00:00.145) 0:01:10.157 ******** 2026-03-28 00:43:47.555079 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.555087 | orchestrator | 2026-03-28 00:43:47.555095 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-28 00:43:47.555103 | orchestrator | Saturday 28 March 2026 00:43:46 +0000 (0:00:00.146) 0:01:10.303 ******** 2026-03-28 00:43:47.555110 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.555118 | orchestrator | 2026-03-28 00:43:47.555126 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-28 00:43:47.555134 | orchestrator | Saturday 28 March 2026 00:43:46 +0000 (0:00:00.149) 0:01:10.452 ******** 2026-03-28 00:43:47.555142 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.555149 | orchestrator | 2026-03-28 00:43:47.555157 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-28 00:43:47.555165 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.178) 0:01:10.631 ******** 2026-03-28 00:43:47.555173 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.555181 | orchestrator | 2026-03-28 00:43:47.555188 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-28 00:43:47.555196 | 
orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.144) 0:01:10.775 ******** 2026-03-28 00:43:47.555204 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:47.555212 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:47.555220 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.555228 | orchestrator | 2026-03-28 00:43:47.555236 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-28 00:43:47.555244 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.158) 0:01:10.933 ******** 2026-03-28 00:43:47.555258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:47.555267 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:47.555275 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:47.555283 | orchestrator | 2026-03-28 00:43:47.555291 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-28 00:43:47.555304 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.170) 0:01:11.104 ******** 2026-03-28 00:43:47.555318 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:50.750178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 
00:43:50.750284 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:50.750300 | orchestrator | 2026-03-28 00:43:50.750312 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-28 00:43:50.750325 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.155) 0:01:11.259 ******** 2026-03-28 00:43:50.750336 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:50.750363 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:50.750374 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:50.750385 | orchestrator | 2026-03-28 00:43:50.750396 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-28 00:43:50.750407 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.151) 0:01:11.411 ******** 2026-03-28 00:43:50.750417 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:50.750428 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:50.750439 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:50.750450 | orchestrator | 2026-03-28 00:43:50.750461 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-28 00:43:50.750471 | orchestrator | Saturday 28 March 2026 00:43:47 +0000 (0:00:00.185) 0:01:11.597 ******** 2026-03-28 00:43:50.750482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 
'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:50.750493 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:50.750503 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:50.750514 | orchestrator | 2026-03-28 00:43:50.750591 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-28 00:43:50.750619 | orchestrator | Saturday 28 March 2026 00:43:48 +0000 (0:00:00.164) 0:01:11.761 ******** 2026-03-28 00:43:50.750631 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:50.750641 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:50.750652 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:50.750674 | orchestrator | 2026-03-28 00:43:50.750685 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-28 00:43:50.750696 | orchestrator | Saturday 28 March 2026 00:43:48 +0000 (0:00:00.378) 0:01:12.140 ******** 2026-03-28 00:43:50.750707 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:50.750717 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:50.750728 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:50.750764 | orchestrator | 2026-03-28 00:43:50.750776 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-28 
00:43:50.750787 | orchestrator | Saturday 28 March 2026 00:43:48 +0000 (0:00:00.180) 0:01:12.321 ******** 2026-03-28 00:43:50.750797 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:43:50.750809 | orchestrator | 2026-03-28 00:43:50.750820 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-28 00:43:50.750831 | orchestrator | Saturday 28 March 2026 00:43:49 +0000 (0:00:00.509) 0:01:12.830 ******** 2026-03-28 00:43:50.750841 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:43:50.750852 | orchestrator | 2026-03-28 00:43:50.750863 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-28 00:43:50.750873 | orchestrator | Saturday 28 March 2026 00:43:49 +0000 (0:00:00.540) 0:01:13.371 ******** 2026-03-28 00:43:50.750884 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:43:50.750894 | orchestrator | 2026-03-28 00:43:50.750905 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-28 00:43:50.750916 | orchestrator | Saturday 28 March 2026 00:43:49 +0000 (0:00:00.159) 0:01:13.530 ******** 2026-03-28 00:43:50.750927 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'vg_name': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'}) 2026-03-28 00:43:50.750939 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'vg_name': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'}) 2026-03-28 00:43:50.750950 | orchestrator | 2026-03-28 00:43:50.750961 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-28 00:43:50.750971 | orchestrator | Saturday 28 March 2026 00:43:50 +0000 (0:00:00.168) 0:01:13.699 ******** 2026-03-28 00:43:50.750999 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 
'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:50.751011 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:50.751021 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:50.751032 | orchestrator | 2026-03-28 00:43:50.751043 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-28 00:43:50.751053 | orchestrator | Saturday 28 March 2026 00:43:50 +0000 (0:00:00.176) 0:01:13.876 ******** 2026-03-28 00:43:50.751070 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:50.751081 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:50.751092 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:50.751102 | orchestrator | 2026-03-28 00:43:50.751113 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-28 00:43:50.751123 | orchestrator | Saturday 28 March 2026 00:43:50 +0000 (0:00:00.150) 0:01:14.026 ******** 2026-03-28 00:43:50.751134 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})  2026-03-28 00:43:50.751145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})  2026-03-28 00:43:50.751155 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:43:50.751166 | orchestrator | 2026-03-28 00:43:50.751176 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-28 
00:43:50.751187 | orchestrator | Saturday 28 March 2026 00:43:50 +0000 (0:00:00.166) 0:01:14.193 ******** 2026-03-28 00:43:50.751197 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 00:43:50.751208 | orchestrator |  "lvm_report": { 2026-03-28 00:43:50.751219 | orchestrator |  "lv": [ 2026-03-28 00:43:50.751237 | orchestrator |  { 2026-03-28 00:43:50.751248 | orchestrator |  "lv_name": "osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d", 2026-03-28 00:43:50.751259 | orchestrator |  "vg_name": "ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d" 2026-03-28 00:43:50.751270 | orchestrator |  }, 2026-03-28 00:43:50.751280 | orchestrator |  { 2026-03-28 00:43:50.751291 | orchestrator |  "lv_name": "osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e", 2026-03-28 00:43:50.751327 | orchestrator |  "vg_name": "ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e" 2026-03-28 00:43:50.751338 | orchestrator |  } 2026-03-28 00:43:50.751349 | orchestrator |  ], 2026-03-28 00:43:50.751359 | orchestrator |  "pv": [ 2026-03-28 00:43:50.751370 | orchestrator |  { 2026-03-28 00:43:50.751381 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-28 00:43:50.751392 | orchestrator |  "vg_name": "ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e" 2026-03-28 00:43:50.751415 | orchestrator |  }, 2026-03-28 00:43:50.751438 | orchestrator |  { 2026-03-28 00:43:50.751461 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-28 00:43:50.751472 | orchestrator |  "vg_name": "ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d" 2026-03-28 00:43:50.751482 | orchestrator |  } 2026-03-28 00:43:50.751493 | orchestrator |  ] 2026-03-28 00:43:50.751503 | orchestrator |  } 2026-03-28 00:43:50.751514 | orchestrator | } 2026-03-28 00:43:50.751552 | orchestrator | 2026-03-28 00:43:50.751564 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:43:50.751575 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 00:43:50.751586 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 00:43:50.751597 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-28 00:43:50.751607 | orchestrator | 2026-03-28 00:43:50.751618 | orchestrator | 2026-03-28 00:43:50.751629 | orchestrator | 2026-03-28 00:43:50.751639 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:43:50.751650 | orchestrator | Saturday 28 March 2026 00:43:50 +0000 (0:00:00.154) 0:01:14.348 ******** 2026-03-28 00:43:50.751661 | orchestrator | =============================================================================== 2026-03-28 00:43:50.751671 | orchestrator | Create block VGs -------------------------------------------------------- 5.56s 2026-03-28 00:43:50.751682 | orchestrator | Create block LVs -------------------------------------------------------- 4.03s 2026-03-28 00:43:50.751692 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.90s 2026-03-28 00:43:50.751703 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.61s 2026-03-28 00:43:50.751714 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.59s 2026-03-28 00:43:50.751724 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2026-03-28 00:43:50.751734 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.54s 2026-03-28 00:43:50.751745 | orchestrator | Add known partitions to the list of available block devices ------------- 1.51s 2026-03-28 00:43:50.751763 | orchestrator | Add known links to the list of available block devices ------------------ 1.24s 2026-03-28 00:43:51.181575 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s 2026-03-28 
00:43:51.181678 | orchestrator | Print LVM report data --------------------------------------------------- 0.97s 2026-03-28 00:43:51.181693 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s 2026-03-28 00:43:51.181704 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2026-03-28 00:43:51.181715 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.84s 2026-03-28 00:43:51.181754 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-03-28 00:43:51.181765 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2026-03-28 00:43:51.181793 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.75s 2026-03-28 00:43:51.181804 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.74s 2026-03-28 00:43:51.181814 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.73s 2026-03-28 00:43:51.181825 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.72s 2026-03-28 00:44:02.755788 | orchestrator | 2026-03-28 00:44:02 | INFO  | Prepare task for execution of facts. 2026-03-28 00:44:02.838108 | orchestrator | 2026-03-28 00:44:02 | INFO  | Task 610431b1-9b8a-4ed0-a59e-d94f452d481f (facts) was prepared for execution. 2026-03-28 00:44:02.838223 | orchestrator | 2026-03-28 00:44:02 | INFO  | It takes a moment until task 610431b1-9b8a-4ed0-a59e-d94f452d481f (facts) has been started and output is visible here. 
2026-03-28 00:44:15.566145 | orchestrator | 2026-03-28 00:44:15.566288 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-28 00:44:15.566312 | orchestrator | 2026-03-28 00:44:15.566331 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-28 00:44:15.566349 | orchestrator | Saturday 28 March 2026 00:44:06 +0000 (0:00:00.369) 0:00:00.369 ******** 2026-03-28 00:44:15.566367 | orchestrator | ok: [testbed-manager] 2026-03-28 00:44:15.566385 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:44:15.566402 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:44:15.566419 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:44:15.566435 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:44:15.566452 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:15.566468 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:44:15.566485 | orchestrator | 2026-03-28 00:44:15.566502 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-28 00:44:15.566554 | orchestrator | Saturday 28 March 2026 00:44:07 +0000 (0:00:01.323) 0:00:01.692 ******** 2026-03-28 00:44:15.566573 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:44:15.566592 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:44:15.566610 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:44:15.566627 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:44:15.566645 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:15.566662 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:15.566679 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:44:15.566695 | orchestrator | 2026-03-28 00:44:15.566711 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-28 00:44:15.566727 | orchestrator | 2026-03-28 00:44:15.566743 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-28 00:44:15.566760 | orchestrator | Saturday 28 March 2026 00:44:08 +0000 (0:00:01.201) 0:00:02.894 ******** 2026-03-28 00:44:15.566777 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:44:15.566793 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:44:15.566808 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:44:15.566823 | orchestrator | ok: [testbed-manager] 2026-03-28 00:44:15.566839 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:44:15.566853 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:44:15.566870 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:44:15.566886 | orchestrator | 2026-03-28 00:44:15.566903 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-28 00:44:15.566920 | orchestrator | 2026-03-28 00:44:15.566935 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-28 00:44:15.566951 | orchestrator | Saturday 28 March 2026 00:44:14 +0000 (0:00:05.806) 0:00:08.700 ******** 2026-03-28 00:44:15.566967 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:44:15.566982 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:44:15.567033 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:44:15.567050 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:44:15.567065 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:44:15.567079 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:44:15.567094 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:44:15.567109 | orchestrator | 2026-03-28 00:44:15.567124 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:44:15.567139 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:44:15.567157 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-28 00:44:15.567171 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:44:15.567186 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:44:15.567201 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:44:15.567216 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:44:15.567232 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 00:44:15.567247 | orchestrator | 2026-03-28 00:44:15.567262 | orchestrator | 2026-03-28 00:44:15.567278 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:44:15.567294 | orchestrator | Saturday 28 March 2026 00:44:15 +0000 (0:00:00.515) 0:00:09.215 ******** 2026-03-28 00:44:15.567308 | orchestrator | =============================================================================== 2026-03-28 00:44:15.567323 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.81s 2026-03-28 00:44:15.567338 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.32s 2026-03-28 00:44:15.567370 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s 2026-03-28 00:44:15.567384 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-03-28 00:44:27.207126 | orchestrator | 2026-03-28 00:44:27 | INFO  | Prepare task for execution of frr. 2026-03-28 00:44:27.290822 | orchestrator | 2026-03-28 00:44:27 | INFO  | Task a4dcba59-b1b0-4fa2-8338-9a56e3b63d05 (frr) was prepared for execution. 
2026-03-28 00:44:27.291079 | orchestrator | 2026-03-28 00:44:27 | INFO  | It takes a moment until task a4dcba59-b1b0-4fa2-8338-9a56e3b63d05 (frr) has been started and output is visible here. 2026-03-28 00:44:53.207994 | orchestrator | 2026-03-28 00:44:53.208135 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-28 00:44:53.208164 | orchestrator | 2026-03-28 00:44:53.208179 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-28 00:44:53.208191 | orchestrator | Saturday 28 March 2026 00:44:30 +0000 (0:00:00.343) 0:00:00.343 ******** 2026-03-28 00:44:53.208203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:44:53.208215 | orchestrator | 2026-03-28 00:44:53.208226 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-28 00:44:53.208237 | orchestrator | Saturday 28 March 2026 00:44:31 +0000 (0:00:00.231) 0:00:00.574 ******** 2026-03-28 00:44:53.208248 | orchestrator | changed: [testbed-manager] 2026-03-28 00:44:53.208260 | orchestrator | 2026-03-28 00:44:53.208271 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-28 00:44:53.208310 | orchestrator | Saturday 28 March 2026 00:44:32 +0000 (0:00:01.584) 0:00:02.159 ******** 2026-03-28 00:44:53.208321 | orchestrator | changed: [testbed-manager] 2026-03-28 00:44:53.208332 | orchestrator | 2026-03-28 00:44:53.208343 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-28 00:44:53.208354 | orchestrator | Saturday 28 March 2026 00:44:42 +0000 (0:00:09.643) 0:00:11.802 ******** 2026-03-28 00:44:53.208365 | orchestrator | ok: [testbed-manager] 2026-03-28 00:44:53.208377 | orchestrator | 2026-03-28 00:44:53.208388 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-28 00:44:53.208399 | orchestrator | Saturday 28 March 2026 00:44:43 +0000 (0:00:01.034) 0:00:12.837 ******** 2026-03-28 00:44:53.208410 | orchestrator | changed: [testbed-manager] 2026-03-28 00:44:53.208420 | orchestrator | 2026-03-28 00:44:53.208431 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-28 00:44:53.208442 | orchestrator | Saturday 28 March 2026 00:44:44 +0000 (0:00:01.064) 0:00:13.901 ******** 2026-03-28 00:44:53.208452 | orchestrator | ok: [testbed-manager] 2026-03-28 00:44:53.208463 | orchestrator | 2026-03-28 00:44:53.208474 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-28 00:44:53.208484 | orchestrator | Saturday 28 March 2026 00:44:45 +0000 (0:00:01.203) 0:00:15.105 ******** 2026-03-28 00:44:53.208547 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:44:53.208563 | orchestrator | 2026-03-28 00:44:53.208576 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-28 00:44:53.208589 | orchestrator | Saturday 28 March 2026 00:44:45 +0000 (0:00:00.163) 0:00:15.269 ******** 2026-03-28 00:44:53.208601 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:44:53.208614 | orchestrator | 2026-03-28 00:44:53.208626 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-28 00:44:53.208639 | orchestrator | Saturday 28 March 2026 00:44:46 +0000 (0:00:00.303) 0:00:15.572 ******** 2026-03-28 00:44:53.208651 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:44:53.208664 | orchestrator | 2026-03-28 00:44:53.208676 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-28 00:44:53.208690 | orchestrator | Saturday 28 March 2026 00:44:46 +0000 (0:00:00.158) 0:00:15.731 ******** 2026-03-28 
00:44:53.208702 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:44:53.208714 | orchestrator | 2026-03-28 00:44:53.208726 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-28 00:44:53.208739 | orchestrator | Saturday 28 March 2026 00:44:46 +0000 (0:00:00.162) 0:00:15.893 ******** 2026-03-28 00:44:53.208751 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:44:53.208764 | orchestrator | 2026-03-28 00:44:53.208776 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-28 00:44:53.208788 | orchestrator | Saturday 28 March 2026 00:44:46 +0000 (0:00:00.146) 0:00:16.039 ******** 2026-03-28 00:44:53.208801 | orchestrator | changed: [testbed-manager] 2026-03-28 00:44:53.208813 | orchestrator | 2026-03-28 00:44:53.208825 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-28 00:44:53.208838 | orchestrator | Saturday 28 March 2026 00:44:47 +0000 (0:00:01.090) 0:00:17.129 ******** 2026-03-28 00:44:53.208850 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-28 00:44:53.208862 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-28 00:44:53.208876 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-28 00:44:53.208889 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-28 00:44:53.208902 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-28 00:44:53.208916 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-28 00:44:53.208936 | orchestrator | 2026-03-28 00:44:53.208947 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-28 00:44:53.208958 | orchestrator | Saturday 28 March 2026 00:44:50 +0000 (0:00:02.471) 0:00:19.601 ******** 2026-03-28 00:44:53.208969 | orchestrator | ok: [testbed-manager] 2026-03-28 00:44:53.208980 | orchestrator | 2026-03-28 00:44:53.208990 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-28 00:44:53.209001 | orchestrator | Saturday 28 March 2026 00:44:51 +0000 (0:00:01.333) 0:00:20.935 ******** 2026-03-28 00:44:53.209012 | orchestrator | changed: [testbed-manager] 2026-03-28 00:44:53.209023 | orchestrator | 2026-03-28 00:44:53.209034 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:44:53.209045 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-28 00:44:53.209056 | orchestrator | 2026-03-28 00:44:53.209075 | orchestrator | 2026-03-28 00:44:53.209119 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:44:53.209141 | orchestrator | Saturday 28 March 2026 00:44:52 +0000 (0:00:01.448) 0:00:22.383 ******** 2026-03-28 00:44:53.209161 | orchestrator | =============================================================================== 2026-03-28 00:44:53.209173 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.64s 2026-03-28 00:44:53.209202 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.47s 2026-03-28 00:44:53.209214 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.58s 2026-03-28 00:44:53.209225 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.45s 2026-03-28 00:44:53.209236 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.33s 
2026-03-28 00:44:53.209247 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.20s 2026-03-28 00:44:53.209257 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.09s 2026-03-28 00:44:53.209268 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.07s 2026-03-28 00:44:53.209279 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.03s 2026-03-28 00:44:53.209289 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.30s 2026-03-28 00:44:53.209300 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-03-28 00:44:53.209311 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s 2026-03-28 00:44:53.209321 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-03-28 00:44:53.209332 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s 2026-03-28 00:44:53.209343 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-03-28 00:44:53.469052 | orchestrator | 2026-03-28 00:44:53.473276 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Mar 28 00:44:53 UTC 2026 2026-03-28 00:44:53.473347 | orchestrator | 2026-03-28 00:44:54.666886 | orchestrator | 2026-03-28 00:44:54 | INFO  | Collection nutshell is prepared for execution 2026-03-28 00:44:54.785790 | orchestrator | 2026-03-28 00:44:54 | INFO  | A [0] - dotfiles 2026-03-28 00:45:04.825012 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [0] - homer 2026-03-28 00:45:04.825099 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [0] - netdata 2026-03-28 00:45:04.825106 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [0] - openstackclient 2026-03-28 00:45:04.825111 | orchestrator | 2026-03-28 
00:45:04 | INFO  | A [0] - phpmyadmin 2026-03-28 00:45:04.825116 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [0] - common 2026-03-28 00:45:04.829870 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [1] -- loadbalancer 2026-03-28 00:45:04.829913 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [2] --- opensearch 2026-03-28 00:45:04.829943 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [2] --- mariadb-ng 2026-03-28 00:45:04.829949 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [3] ---- horizon 2026-03-28 00:45:04.830148 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [3] ---- keystone 2026-03-28 00:45:04.830663 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [4] ----- neutron 2026-03-28 00:45:04.831266 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [5] ------ wait-for-nova 2026-03-28 00:45:04.831348 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [6] ------- octavia 2026-03-28 00:45:04.832756 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [4] ----- barbican 2026-03-28 00:45:04.832803 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [4] ----- designate 2026-03-28 00:45:04.832815 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [4] ----- ironic 2026-03-28 00:45:04.832956 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [4] ----- placement 2026-03-28 00:45:04.832975 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [4] ----- magnum 2026-03-28 00:45:04.834871 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [1] -- openvswitch 2026-03-28 00:45:04.835393 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [2] --- ovn 2026-03-28 00:45:04.835425 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [1] -- memcached 2026-03-28 00:45:04.835436 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [1] -- redis 2026-03-28 00:45:04.835452 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [1] -- rabbitmq-ng 2026-03-28 00:45:04.835902 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [0] - kubernetes 2026-03-28 00:45:04.838469 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [1] -- 
kubeconfig 2026-03-28 00:45:04.838532 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [1] -- copy-kubeconfig 2026-03-28 00:45:04.839005 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [0] - ceph 2026-03-28 00:45:04.841083 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [1] -- ceph-pools 2026-03-28 00:45:04.841129 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [2] --- copy-ceph-keys 2026-03-28 00:45:04.841150 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [3] ---- cephclient 2026-03-28 00:45:04.841292 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-28 00:45:04.841325 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [4] ----- wait-for-keystone 2026-03-28 00:45:04.841687 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-28 00:45:04.841715 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [5] ------ glance 2026-03-28 00:45:04.841727 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [5] ------ cinder 2026-03-28 00:45:04.841955 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [5] ------ nova 2026-03-28 00:45:04.842289 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [4] ----- prometheus 2026-03-28 00:45:04.842311 | orchestrator | 2026-03-28 00:45:04 | INFO  | A [5] ------ grafana 2026-03-28 00:45:05.069188 | orchestrator | 2026-03-28 00:45:05 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-28 00:45:05.069289 | orchestrator | 2026-03-28 00:45:05 | INFO  | Tasks are running in the background 2026-03-28 00:45:07.004352 | orchestrator | 2026-03-28 00:45:07 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-28 00:45:09.242788 | orchestrator | 2026-03-28 00:45:09 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state STARTED 2026-03-28 00:45:09.243645 | orchestrator | 2026-03-28 00:45:09 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED 2026-03-28 00:45:09.245786 | orchestrator | 2026-03-28 00:45:09 | INFO 
| Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:09.246483 | orchestrator | 2026-03-28 00:45:09 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:09.248811 | orchestrator | 2026-03-28 00:45:09 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:09.249558 | orchestrator | 2026-03-28 00:45:09 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:09.252408 | orchestrator | 2026-03-28 00:45:09 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:09.252445 | orchestrator | 2026-03-28 00:45:09 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:12.317414 | orchestrator | 2026-03-28 00:45:12 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state STARTED
2026-03-28 00:45:12.321641 | orchestrator | 2026-03-28 00:45:12 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:12.322652 | orchestrator | 2026-03-28 00:45:12 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:12.323982 | orchestrator | 2026-03-28 00:45:12 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:12.329741 | orchestrator | 2026-03-28 00:45:12 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:12.330563 | orchestrator | 2026-03-28 00:45:12 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:12.334915 | orchestrator | 2026-03-28 00:45:12 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:12.334948 | orchestrator | 2026-03-28 00:45:12 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:15.395085 | orchestrator | 2026-03-28 00:45:15 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state STARTED
2026-03-28 00:45:15.397019 | orchestrator | 2026-03-28 00:45:15 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:15.401232 | orchestrator | 2026-03-28 00:45:15 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:15.403274 | orchestrator | 2026-03-28 00:45:15 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:15.406885 | orchestrator | 2026-03-28 00:45:15 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:15.407915 | orchestrator | 2026-03-28 00:45:15 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:15.408881 | orchestrator | 2026-03-28 00:45:15 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:15.409112 | orchestrator | 2026-03-28 00:45:15 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:18.470237 | orchestrator | 2026-03-28 00:45:18 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state STARTED
2026-03-28 00:45:18.473597 | orchestrator | 2026-03-28 00:45:18 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:18.474565 | orchestrator | 2026-03-28 00:45:18 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:18.475571 | orchestrator | 2026-03-28 00:45:18 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:18.476341 | orchestrator | 2026-03-28 00:45:18 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:18.478607 | orchestrator | 2026-03-28 00:45:18 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:18.488608 | orchestrator | 2026-03-28 00:45:18 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:18.488691 | orchestrator | 2026-03-28 00:45:18 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:21.608182 | orchestrator | 2026-03-28 00:45:21 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state STARTED
2026-03-28 00:45:21.609800 | orchestrator | 2026-03-28 00:45:21 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:21.615848 | orchestrator | 2026-03-28 00:45:21 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:21.642126 | orchestrator | 2026-03-28 00:45:21 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:21.642217 | orchestrator | 2026-03-28 00:45:21 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:21.642234 | orchestrator | 2026-03-28 00:45:21 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:21.642246 | orchestrator | 2026-03-28 00:45:21 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:21.642260 | orchestrator | 2026-03-28 00:45:21 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:24.820403 | orchestrator | 2026-03-28 00:45:24 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state STARTED
2026-03-28 00:45:24.820573 | orchestrator | 2026-03-28 00:45:24 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:24.820590 | orchestrator | 2026-03-28 00:45:24 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:24.820602 | orchestrator | 2026-03-28 00:45:24 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:24.820613 | orchestrator | 2026-03-28 00:45:24 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:24.820624 | orchestrator | 2026-03-28 00:45:24 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:24.820635 | orchestrator | 2026-03-28 00:45:24 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:24.820646 | orchestrator | 2026-03-28 00:45:24 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:28.007445 | orchestrator | 2026-03-28 00:45:28 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state STARTED
2026-03-28 00:45:28.009825 | orchestrator | 2026-03-28 00:45:28 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:28.012866 | orchestrator | 2026-03-28 00:45:28 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:28.013638 | orchestrator | 2026-03-28 00:45:28 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:28.014873 | orchestrator | 2026-03-28 00:45:28 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:28.019273 | orchestrator | 2026-03-28 00:45:28 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:28.026242 | orchestrator | 2026-03-28 00:45:28 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:28.026307 | orchestrator | 2026-03-28 00:45:28 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:31.134371 | orchestrator | 2026-03-28 00:45:31 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state STARTED
2026-03-28 00:45:31.134545 | orchestrator | 2026-03-28 00:45:31 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:31.136683 | orchestrator | 2026-03-28 00:45:31 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:31.138439 | orchestrator | 2026-03-28 00:45:31 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:31.140740 | orchestrator | 2026-03-28 00:45:31 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:31.143002 | orchestrator | 2026-03-28 00:45:31 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:31.149591 | orchestrator | 2026-03-28 00:45:31 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:31.150204 | orchestrator | 2026-03-28 00:45:31 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:34.346893 | orchestrator | 2026-03-28 00:45:34 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state STARTED
2026-03-28 00:45:34.349188 | orchestrator | 2026-03-28 00:45:34 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:34.350697 | orchestrator | 2026-03-28 00:45:34 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:34.355084 | orchestrator | 2026-03-28 00:45:34 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:34.357982 | orchestrator | 2026-03-28 00:45:34 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:34.360085 | orchestrator | 2026-03-28 00:45:34 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:34.364181 | orchestrator | 2026-03-28 00:45:34 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:34.364268 | orchestrator | 2026-03-28 00:45:34 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:37.566151 | orchestrator | 2026-03-28 00:45:37 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state STARTED
2026-03-28 00:45:37.567669 | orchestrator | 2026-03-28 00:45:37 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:37.571862 | orchestrator | 2026-03-28 00:45:37 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:37.573231 | orchestrator | 2026-03-28 00:45:37 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:37.579438 | orchestrator | 2026-03-28 00:45:37 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:37.582141 | orchestrator | 2026-03-28 00:45:37 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:37.585551 | orchestrator | 2026-03-28 00:45:37 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:37.585615 | orchestrator | 2026-03-28 00:45:37 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:40.673878 | orchestrator |
2026-03-28 00:45:40.673991 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-28 00:45:40.674008 | orchestrator |
2026-03-28 00:45:40.674078 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-03-28 00:45:40.674090 | orchestrator | Saturday 28 March 2026 00:45:18 +0000 (0:00:01.292) 0:00:01.292 ********
2026-03-28 00:45:40.674101 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:45:40.674113 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:45:40.674123 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:45:40.674134 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:45:40.674143 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:45:40.674153 | orchestrator | changed: [testbed-manager]
2026-03-28 00:45:40.674163 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:45:40.674180 | orchestrator |
2026-03-28 00:45:40.674232 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-28 00:45:40.674249 | orchestrator | Saturday 28 March 2026 00:45:25 +0000 (0:00:06.832) 0:00:08.124 ********
2026-03-28 00:45:40.674264 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-28 00:45:40.674282 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-28 00:45:40.674299 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-28 00:45:40.674315 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-28 00:45:40.674330 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-28 00:45:40.674345 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-28 00:45:40.674361 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-28 00:45:40.674378 | orchestrator |
2026-03-28 00:45:40.674396 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-28 00:45:40.674413 | orchestrator | Saturday 28 March 2026 00:45:28 +0000 (0:00:02.001) 0:00:11.947 ********
2026-03-28 00:45:40.674473 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:45:28.706124', 'end': '2026-03-28 00:45:28.712559', 'delta': '0:00:00.006435', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:45:40.674497 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:45:26.846655', 'end': '2026-03-28 00:45:26.856847', 'delta': '0:00:00.010192', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:45:40.674514 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:45:27.826281', 'end': '2026-03-28 00:45:27.835040', 'delta': '0:00:00.008759', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:45:40.674570 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:45:27.001816', 'end': '2026-03-28 00:45:27.012149', 'delta': '0:00:00.010333', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:45:40.674607 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:45:26.982534', 'end': '2026-03-28 00:45:26.992014', 'delta': '0:00:00.009480', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:45:40.674627 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:45:26.818881', 'end': '2026-03-28 00:45:26.826025', 'delta': '0:00:00.007144', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:45:40.675023 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-28 00:45:26.860181', 'end': '2026-03-28 00:45:26.866344', 'delta': '0:00:00.006163', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-28 00:45:40.675049 | orchestrator |
2026-03-28 00:45:40.675067 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-28 00:45:40.675085 | orchestrator | Saturday 28 March 2026 00:45:30 +0000 (0:00:02.001) 0:00:13.948 ********
2026-03-28 00:45:40.675102 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-28 00:45:40.675120 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-28 00:45:40.675137 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-28 00:45:40.675155 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-28 00:45:40.675173 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-28 00:45:40.675190 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-28 00:45:40.675206 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-28 00:45:40.675223 | orchestrator |
2026-03-28 00:45:40.675240 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-28 00:45:40.675257 | orchestrator | Saturday 28 March 2026 00:45:34 +0000 (0:00:03.722) 0:00:17.671 ********
2026-03-28 00:45:40.675274 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-28 00:45:40.675309 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-28 00:45:40.675325 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-28 00:45:40.675341 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-28 00:45:40.675358 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-28 00:45:40.675375 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-28 00:45:40.675392 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-28 00:45:40.675409 | orchestrator |
2026-03-28 00:45:40.675426 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:45:40.675493 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:45:40.675515 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:45:40.675530 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:45:40.675546 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:45:40.675573 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:45:40.675589 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:45:40.675606 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:45:40.675622 | orchestrator |
2026-03-28 00:45:40.675638 | orchestrator |
2026-03-28 00:45:40.675654 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:45:40.675670 | orchestrator | Saturday 28 March 2026 00:45:38 +0000 (0:00:04.239) 0:00:21.910 ********
2026-03-28 00:45:40.675687 | orchestrator | ===============================================================================
2026-03-28 00:45:40.675703 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 6.83s
2026-03-28 00:45:40.675718 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.24s
2026-03-28 00:45:40.675733 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.82s
2026-03-28 00:45:40.675749 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.72s
2026-03-28 00:45:40.675765 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.00s
2026-03-28 00:45:40.675782 | orchestrator | 2026-03-28 00:45:40 | INFO  | Task d36d8fbd-4b9a-476e-87df-ccade075a0ab is in state SUCCESS
2026-03-28 00:45:40.680309 | orchestrator | 2026-03-28 00:45:40 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:40.687251 | orchestrator | 2026-03-28 00:45:40 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:40.691075 | orchestrator | 2026-03-28 00:45:40 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:45:40.698825 | orchestrator | 2026-03-28 00:45:40 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:40.709261 | orchestrator | 2026-03-28 00:45:40 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:40.722492 | orchestrator | 2026-03-28 00:45:40 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:40.738519 | orchestrator | 2026-03-28 00:45:40 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:40.740106 | orchestrator | 2026-03-28 00:45:40 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:43.816350 | orchestrator | 2026-03-28 00:45:43 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:43.816948 | orchestrator | 2026-03-28 00:45:43 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:43.821018 | orchestrator | 2026-03-28 00:45:43 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:45:43.823310 | orchestrator | 2026-03-28 00:45:43 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:43.824296 | orchestrator | 2026-03-28 00:45:43 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:43.825084 | orchestrator | 2026-03-28 00:45:43 | INFO  | Task
14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:43.826139 | orchestrator | 2026-03-28 00:45:43 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:43.826168 | orchestrator | 2026-03-28 00:45:43 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:46.963323 | orchestrator | 2026-03-28 00:45:46 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:46.965090 | orchestrator | 2026-03-28 00:45:46 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:46.965128 | orchestrator | 2026-03-28 00:45:46 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:45:46.968075 | orchestrator | 2026-03-28 00:45:46 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:46.968966 | orchestrator | 2026-03-28 00:45:46 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:46.974197 | orchestrator | 2026-03-28 00:45:46 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:46.974291 | orchestrator | 2026-03-28 00:45:46 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:46.974309 | orchestrator | 2026-03-28 00:45:46 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:50.114486 | orchestrator | 2026-03-28 00:45:50 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:50.114591 | orchestrator | 2026-03-28 00:45:50 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:50.114607 | orchestrator | 2026-03-28 00:45:50 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:45:50.114621 | orchestrator | 2026-03-28 00:45:50 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:50.114640 | orchestrator | 2026-03-28 00:45:50 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:50.114661 | orchestrator | 2026-03-28 00:45:50 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:50.114683 | orchestrator | 2026-03-28 00:45:50 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:50.114705 | orchestrator | 2026-03-28 00:45:50 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:53.201358 | orchestrator | 2026-03-28 00:45:53 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:53.201531 | orchestrator | 2026-03-28 00:45:53 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:53.201556 | orchestrator | 2026-03-28 00:45:53 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:45:53.201575 | orchestrator | 2026-03-28 00:45:53 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:53.201635 | orchestrator | 2026-03-28 00:45:53 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:53.201657 | orchestrator | 2026-03-28 00:45:53 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:53.201676 | orchestrator | 2026-03-28 00:45:53 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:53.201694 | orchestrator | 2026-03-28 00:45:53 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:56.250185 | orchestrator | 2026-03-28 00:45:56 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:56.250267 | orchestrator | 2026-03-28 00:45:56 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:56.250276 | orchestrator | 2026-03-28 00:45:56 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:45:56.250283 | orchestrator | 2026-03-28 00:45:56 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:56.250289 | orchestrator | 2026-03-28 00:45:56 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state STARTED
2026-03-28 00:45:56.250295 | orchestrator | 2026-03-28 00:45:56 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:56.250302 | orchestrator | 2026-03-28 00:45:56 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:56.250308 | orchestrator | 2026-03-28 00:45:56 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:45:59.314369 | orchestrator | 2026-03-28 00:45:59 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:45:59.321373 | orchestrator | 2026-03-28 00:45:59 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:45:59.321694 | orchestrator | 2026-03-28 00:45:59 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:45:59.329631 | orchestrator | 2026-03-28 00:45:59 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:45:59.336019 | orchestrator | 2026-03-28 00:45:59 | INFO  | Task 1cde8362-f599-48d7-baea-449f749438e7 is in state SUCCESS
2026-03-28 00:45:59.341143 | orchestrator | 2026-03-28 00:45:59 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:45:59.343619 | orchestrator | 2026-03-28 00:45:59 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:45:59.343709 | orchestrator | 2026-03-28 00:45:59 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:02.404586 | orchestrator | 2026-03-28 00:46:02 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:02.404825 | orchestrator | 2026-03-28 00:46:02 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:02.410955 | orchestrator | 2026-03-28 00:46:02 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:46:02.411084 | orchestrator | 2026-03-28 00:46:02 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:02.415693 | orchestrator | 2026-03-28 00:46:02 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:02.415747 | orchestrator | 2026-03-28 00:46:02 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:46:02.415760 | orchestrator | 2026-03-28 00:46:02 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:05.528000 | orchestrator | 2026-03-28 00:46:05 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:05.528201 | orchestrator | 2026-03-28 00:46:05 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:05.528670 | orchestrator | 2026-03-28 00:46:05 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:46:05.529154 | orchestrator | 2026-03-28 00:46:05 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:05.529345 | orchestrator | 2026-03-28 00:46:05 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:05.529823 | orchestrator | 2026-03-28 00:46:05 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:46:05.529849 | orchestrator | 2026-03-28 00:46:05 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:08.592352 | orchestrator | 2026-03-28 00:46:08 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:08.597371 | orchestrator | 2026-03-28 00:46:08 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:08.598382 | orchestrator | 2026-03-28 00:46:08 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:46:08.599270 | orchestrator | 2026-03-28 00:46:08 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:08.601155 | orchestrator | 2026-03-28 00:46:08 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:08.602202 | orchestrator | 2026-03-28 00:46:08 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:46:08.602561 | orchestrator | 2026-03-28 00:46:08 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:11.665062 | orchestrator | 2026-03-28 00:46:11 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:11.665807 | orchestrator | 2026-03-28 00:46:11 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:11.669016 | orchestrator | 2026-03-28 00:46:11 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:46:11.669234 | orchestrator | 2026-03-28 00:46:11 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:11.671655 | orchestrator | 2026-03-28 00:46:11 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:11.672080 | orchestrator | 2026-03-28 00:46:11 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state STARTED
2026-03-28 00:46:11.672128 | orchestrator | 2026-03-28 00:46:11 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:14.718866 | orchestrator | 2026-03-28 00:46:14 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:14.720750 | orchestrator | 2026-03-28 00:46:14 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:14.721891 | orchestrator | 2026-03-28 00:46:14 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:46:14.723962 | orchestrator | 2026-03-28 00:46:14 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:14.729536 | orchestrator | 2026-03-28 00:46:14 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:14.734722 | orchestrator | 2026-03-28 00:46:14 | INFO  | Task 01832221-5fb5-4d54-a43f-959033b0ad25 is in state SUCCESS
2026-03-28 00:46:14.734815 | orchestrator | 2026-03-28 00:46:14 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:17.788726 | orchestrator | 2026-03-28 00:46:17 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:17.794000 | orchestrator | 2026-03-28 00:46:17 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:17.799884 | orchestrator | 2026-03-28 00:46:17 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:46:17.799951 | orchestrator | 2026-03-28 00:46:17 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:17.802769 | orchestrator | 2026-03-28 00:46:17 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:17.802793 | orchestrator | 2026-03-28 00:46:17 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:20.858308 | orchestrator | 2026-03-28 00:46:20 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:20.859434 | orchestrator | 2026-03-28 00:46:20 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:20.862922 | orchestrator | 2026-03-28 00:46:20 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:46:20.865749 | orchestrator | 2026-03-28 00:46:20 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:20.869144 | orchestrator | 2026-03-28 00:46:20 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:20.869208 | orchestrator | 2026-03-28 00:46:20 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:23.944676 | orchestrator | 2026-03-28 00:46:23 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:23.946280 | orchestrator | 2026-03-28 00:46:23 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:23.948737 | orchestrator | 2026-03-28 00:46:23 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:46:23.951535 | orchestrator | 2026-03-28 00:46:23 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:23.952717 | orchestrator | 2026-03-28 00:46:23 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:23.952784 | orchestrator | 2026-03-28 00:46:23 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:27.034557 | orchestrator | 2026-03-28 00:46:27 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:27.035911 | orchestrator | 2026-03-28 00:46:27 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:27.038986 | orchestrator | 2026-03-28 00:46:27 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:46:27.043547 | orchestrator | 2026-03-28 00:46:27 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:27.046426 | orchestrator | 2026-03-28 00:46:27 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:27.046490 | orchestrator | 2026-03-28 00:46:27 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:30.106199 | orchestrator | 2026-03-28 00:46:30 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:30.106627 | orchestrator | 2026-03-28 00:46:30 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:30.107729 | orchestrator | 2026-03-28 00:46:30 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED
2026-03-28 00:46:30.108645 | orchestrator | 2026-03-28 00:46:30 | INFO  | Task
54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:46:30.109594 | orchestrator | 2026-03-28 00:46:30 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:46:30.109683 | orchestrator | 2026-03-28 00:46:30 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:46:33.227824 | orchestrator | 2026-03-28 00:46:33 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED 2026-03-28 00:46:33.227952 | orchestrator | 2026-03-28 00:46:33 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED 2026-03-28 00:46:33.227968 | orchestrator | 2026-03-28 00:46:33 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED 2026-03-28 00:46:33.227980 | orchestrator | 2026-03-28 00:46:33 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:46:33.227991 | orchestrator | 2026-03-28 00:46:33 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:46:33.228002 | orchestrator | 2026-03-28 00:46:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:46:36.263339 | orchestrator | 2026-03-28 00:46:36 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED 2026-03-28 00:46:36.265602 | orchestrator | 2026-03-28 00:46:36 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED 2026-03-28 00:46:36.267473 | orchestrator | 2026-03-28 00:46:36 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED 2026-03-28 00:46:36.269778 | orchestrator | 2026-03-28 00:46:36 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:46:36.271183 | orchestrator | 2026-03-28 00:46:36 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:46:36.271245 | orchestrator | 2026-03-28 00:46:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:46:39.326214 | orchestrator | 2026-03-28 00:46:39 | INFO  | Task 
cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED 2026-03-28 00:46:39.326348 | orchestrator | 2026-03-28 00:46:39 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED 2026-03-28 00:46:39.327065 | orchestrator | 2026-03-28 00:46:39 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED 2026-03-28 00:46:39.328290 | orchestrator | 2026-03-28 00:46:39 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:46:39.331631 | orchestrator | 2026-03-28 00:46:39 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:46:39.331717 | orchestrator | 2026-03-28 00:46:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:46:42.394808 | orchestrator | 2026-03-28 00:46:42 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED 2026-03-28 00:46:42.396077 | orchestrator | 2026-03-28 00:46:42 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED 2026-03-28 00:46:42.399650 | orchestrator | 2026-03-28 00:46:42 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED 2026-03-28 00:46:42.403175 | orchestrator | 2026-03-28 00:46:42 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:46:42.407702 | orchestrator | 2026-03-28 00:46:42 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:46:42.407801 | orchestrator | 2026-03-28 00:46:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:46:45.529497 | orchestrator | 2026-03-28 00:46:45 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED 2026-03-28 00:46:45.538107 | orchestrator | 2026-03-28 00:46:45 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED 2026-03-28 00:46:45.538185 | orchestrator | 2026-03-28 00:46:45 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED 2026-03-28 00:46:45.543558 | orchestrator | 2026-03-28 00:46:45 | INFO  | Task 
54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:46:45.546184 | orchestrator | 2026-03-28 00:46:45 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:46:45.548838 | orchestrator | 2026-03-28 00:46:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:46:48.774479 | orchestrator | 2026-03-28 00:46:48 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED 2026-03-28 00:46:48.774590 | orchestrator | 2026-03-28 00:46:48 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED 2026-03-28 00:46:48.774605 | orchestrator | 2026-03-28 00:46:48 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED 2026-03-28 00:46:48.774617 | orchestrator | 2026-03-28 00:46:48 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:46:48.774628 | orchestrator | 2026-03-28 00:46:48 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:46:48.774639 | orchestrator | 2026-03-28 00:46:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:46:51.784762 | orchestrator | 2026-03-28 00:46:51 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED 2026-03-28 00:46:51.786073 | orchestrator | 2026-03-28 00:46:51 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED 2026-03-28 00:46:51.789165 | orchestrator | 2026-03-28 00:46:51 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state STARTED 2026-03-28 00:46:51.790261 | orchestrator | 2026-03-28 00:46:51 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:46:51.792933 | orchestrator | 2026-03-28 00:46:51 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:46:51.792977 | orchestrator | 2026-03-28 00:46:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:46:54.851099 | orchestrator | 2026-03-28 00:46:54 | INFO  | Task 
cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:54.852265 | orchestrator | 2026-03-28 00:46:54 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:54.853081 | orchestrator | 2026-03-28 00:46:54 | INFO  | Task 6e7b30ad-1871-4c93-88a2-ea5f3f14fb46 is in state SUCCESS
2026-03-28 00:46:54.853600 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-28 00:46:54.853609 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-28 00:46:54.853613 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.628) 0:00:00.628 ********
2026-03-28 00:46:54.853618 | orchestrator | ok: [testbed-manager] => {
2026-03-28 00:46:54.853623 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-28 00:46:54.853628 | orchestrator | }
2026-03-28 00:46:54.853636 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-28 00:46:54.853640 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:00.167) 0:00:00.795 ********
2026-03-28 00:46:54.853643 | orchestrator | ok: [testbed-manager]
2026-03-28 00:46:54.853651 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-28 00:46:54.853655 | orchestrator | Saturday 28 March 2026 00:45:19 +0000 (0:00:02.103) 0:00:02.899 ********
2026-03-28 00:46:54.853659 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-28 00:46:54.853663 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-28 00:46:54.853722 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-28 00:46:54.853727 | orchestrator | Saturday 28 March 2026 00:45:22 +0000 (0:00:02.896) 0:00:05.796 ********
2026-03-28 00:46:54.853731 | orchestrator | changed: [testbed-manager]
2026-03-28 00:46:54.853738 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-28 00:46:54.853742 | orchestrator | Saturday 28 March 2026 00:45:25 +0000 (0:00:03.303) 0:00:09.099 ********
2026-03-28 00:46:54.853746 | orchestrator | changed: [testbed-manager]
2026-03-28 00:46:54.853753 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-28 00:46:54.853757 | orchestrator | Saturday 28 March 2026 00:45:27 +0000 (0:00:01.806) 0:00:10.906 ********
2026-03-28 00:46:54.853761 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-28 00:46:54.853764 | orchestrator | ok: [testbed-manager]
2026-03-28 00:46:54.853772 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-28 00:46:54.853776 | orchestrator | Saturday 28 March 2026 00:45:55 +0000 (0:00:28.464) 0:00:39.370 ********
2026-03-28 00:46:54.853779 | orchestrator | changed: [testbed-manager]
2026-03-28 00:46:54.853787 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:46:54.853791 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:46:54.853804 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:46:54.853807 | orchestrator | Saturday 28 March 2026 00:45:58 +0000 (0:00:03.059) 0:00:42.430 ********
2026-03-28 00:46:54.853811 | orchestrator | ===============================================================================
2026-03-28 00:46:54.853815 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 28.46s
2026-03-28 00:46:54.853818 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.30s
2026-03-28 00:46:54.853822 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.06s
2026-03-28 00:46:54.853826 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.90s
2026-03-28 00:46:54.853829 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.10s
2026-03-28 00:46:54.853833 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.81s
2026-03-28 00:46:54.853837 | orchestrator | osism.services.homer : Inform
about new parameter homer_url_opensearch_dashboards --- 0.17s
2026-03-28 00:46:54.853848 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-28 00:46:54.853855 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-28 00:46:54.853859 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.489) 0:00:00.489 ********
2026-03-28 00:46:54.853863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-28 00:46:54.853871 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-28 00:46:54.853875 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:00.640) 0:00:01.130 ********
2026-03-28 00:46:54.853878 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-28 00:46:54.853882 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-28 00:46:54.853886 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-28 00:46:54.853893 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-28 00:46:54.853901 | orchestrator | Saturday 28 March 2026 00:45:21 +0000 (0:00:04.294) 0:00:05.424 ********
2026-03-28 00:46:54.853905 | orchestrator | changed: [testbed-manager]
2026-03-28 00:46:54.853912 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-28 00:46:54.853919 | orchestrator | Saturday 28 March 2026 00:45:26 +0000 (0:00:04.141)
0:00:09.566 ********
2026-03-28 00:46:54.853929 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-28 00:46:54.853933 | orchestrator | ok: [testbed-manager]
2026-03-28 00:46:54.853941 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-28 00:46:54.853945 | orchestrator | Saturday 28 March 2026 00:46:03 +0000 (0:00:37.730) 0:00:47.297 ********
2026-03-28 00:46:54.853948 | orchestrator | changed: [testbed-manager]
2026-03-28 00:46:54.853956 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-28 00:46:54.853960 | orchestrator | Saturday 28 March 2026 00:46:05 +0000 (0:00:02.093) 0:00:49.390 ********
2026-03-28 00:46:54.853963 | orchestrator | ok: [testbed-manager]
2026-03-28 00:46:54.853971 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-28 00:46:54.853974 | orchestrator | Saturday 28 March 2026 00:46:06 +0000 (0:00:01.120) 0:00:50.511 ********
2026-03-28 00:46:54.853978 | orchestrator | changed: [testbed-manager]
2026-03-28 00:46:54.853985 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-28 00:46:54.853989 | orchestrator | Saturday 28 March 2026 00:46:10 +0000 (0:00:03.134) 0:00:53.646 ********
2026-03-28 00:46:54.853993 | orchestrator | changed: [testbed-manager]
2026-03-28 00:46:54.854000 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-28 00:46:54.854004 | orchestrator | Saturday 28 March 2026 00:46:11 +0000 (0:00:00.977) 0:00:54.623 ********
2026-03-28 00:46:54.854008 | orchestrator | changed:
[testbed-manager]
2026-03-28 00:46:54.854044 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-28 00:46:54.854048 | orchestrator | Saturday 28 March 2026 00:46:11 +0000 (0:00:00.588) 0:00:55.211 ********
2026-03-28 00:46:54.854052 | orchestrator | ok: [testbed-manager]
2026-03-28 00:46:54.854060 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:46:54.854064 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:46:54.854075 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:46:54.854079 | orchestrator | Saturday 28 March 2026 00:46:12 +0000 (0:00:00.428) 0:00:55.640 ********
2026-03-28 00:46:54.854082 | orchestrator | ===============================================================================
2026-03-28 00:46:54.854086 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.73s
2026-03-28 00:46:54.854090 | orchestrator | osism.services.openstackclient : Create required directories ------------ 4.29s
2026-03-28 00:46:54.854093 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 4.14s
2026-03-28 00:46:54.854097 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.13s
2026-03-28 00:46:54.854101 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.09s
2026-03-28 00:46:54.854105 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.12s
2026-03-28 00:46:54.854108 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.98s
2026-03-28 00:46:54.854112 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.64s
2026-03-28 00:46:54.854119 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.59s
2026-03-28 00:46:54.854123 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s
2026-03-28 00:46:54.854135 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-28 00:46:54.854142 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-28 00:46:54.854146 | orchestrator | Saturday 28 March 2026 00:45:44 +0000 (0:00:00.399) 0:00:00.399 ********
2026-03-28 00:46:54.854150 | orchestrator | ok: [testbed-manager]
2026-03-28 00:46:54.854157 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-28 00:46:54.854161 | orchestrator | Saturday 28 March 2026 00:45:46 +0000 (0:00:01.798) 0:00:02.198 ********
2026-03-28 00:46:54.854165 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-28 00:46:54.854172 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-28 00:46:54.854176 | orchestrator | Saturday 28 March 2026 00:45:47 +0000 (0:00:00.795) 0:00:02.993 ********
2026-03-28 00:46:54.854180 | orchestrator | changed: [testbed-manager]
2026-03-28 00:46:54.854187 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-28 00:46:54.854191 | orchestrator | Saturday 28 March 2026 00:45:50 +0000 (0:00:02.869) 0:00:05.863 ********
2026-03-28 00:46:54.854194 |
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-28 00:46:54.854198 | orchestrator | ok: [testbed-manager]
2026-03-28 00:46:54.854206 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-28 00:46:54.854209 | orchestrator | Saturday 28 March 2026 00:46:47 +0000 (0:00:57.786) 0:01:03.650 ********
2026-03-28 00:46:54.854213 | orchestrator | changed: [testbed-manager]
2026-03-28 00:46:54.854221 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:46:54.854226 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:46:54.854241 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:46:54.854248 | orchestrator | Saturday 28 March 2026 00:46:52 +0000 (0:00:04.710) 0:01:08.360 ********
2026-03-28 00:46:54.854253 | orchestrator | ===============================================================================
2026-03-28 00:46:54.854257 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 57.79s
2026-03-28 00:46:54.854261 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.71s
2026-03-28 00:46:54.854266 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.87s
2026-03-28 00:46:54.854270 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.80s
2026-03-28 00:46:54.854274 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.80s
2026-03-28 00:46:54.855425 | orchestrator | 2026-03-28 00:46:54 | INFO  | Task
54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:54.856802 | orchestrator | 2026-03-28 00:46:54 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:54.856937 | orchestrator | 2026-03-28 00:46:54 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:46:57.908636 | orchestrator | 2026-03-28 00:46:57 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state STARTED
2026-03-28 00:46:57.910177 | orchestrator | 2026-03-28 00:46:57 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:46:57.911903 | orchestrator | 2026-03-28 00:46:57 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:46:57.915693 | orchestrator | 2026-03-28 00:46:57 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:46:57.915735 | orchestrator | 2026-03-28 00:46:57 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:47:00.971560 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:47:00.971599 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 00:47:00.971619 | orchestrator | Saturday 28 March 2026 00:45:18 +0000 (0:00:01.373) 0:00:01.373 ********
2026-03-28 00:47:00.971639 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-28 00:47:00.971658 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-28 00:47:00.971677 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-28 00:47:00.971695 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-28 00:47:00.971713 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-28 00:47:00.971731 |
orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-28 00:47:00.971749 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-28 00:47:00.971785 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-28 00:47:00.971822 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-28 00:47:00.971840 | orchestrator | Saturday 28 March 2026 00:45:21 +0000 (0:00:02.937) 0:00:04.310 ********
2026-03-28 00:47:00.971884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:47:00.971933 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-28 00:47:00.971951 | orchestrator | Saturday 28 March 2026 00:45:23 +0000 (0:00:01.911) 0:00:06.222 ********
2026-03-28 00:47:00.971969 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:47:00.971988 | orchestrator | ok: [testbed-manager]
2026-03-28 00:47:00.972006 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:47:00.972025 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:47:00.972043 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:47:00.972061 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:47:00.972079 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:47:00.972116 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-28 00:47:00.972134 | orchestrator | Saturday 28 March 2026 00:45:26 +0000 (0:00:03.136) 0:00:09.359 ********
2026-03-28 00:47:00.972153 | orchestrator | ok:
[testbed-node-0]
2026-03-28 00:47:00.972170 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:47:00.972188 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:47:00.972205 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:47:00.972224 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:47:00.972241 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:47:00.972259 | orchestrator | ok: [testbed-manager]
2026-03-28 00:47:00.972295 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-28 00:47:00.972314 | orchestrator | Saturday 28 March 2026 00:45:31 +0000 (0:00:05.213) 0:00:14.573 ********
2026-03-28 00:47:00.972333 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:47:00.972379 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:47:00.972398 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:47:00.972417 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:47:00.972468 | orchestrator | changed: [testbed-manager]
2026-03-28 00:47:00.972489 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:47:00.972507 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:47:00.972545 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-28 00:47:00.972579 | orchestrator | Saturday 28 March 2026 00:45:34 +0000 (0:00:03.501) 0:00:18.074 ********
2026-03-28 00:47:00.972591 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:47:00.972602 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:47:00.972613 | orchestrator | changed: [testbed-manager]
2026-03-28 00:47:00.972624 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:47:00.972635 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:47:00.972646 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:47:00.972657 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:47:00.972667 |
orchestrator |
2026-03-28 00:47:00.972678 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-28 00:47:00.972689 | orchestrator | Saturday 28 March 2026 00:45:45 +0000 (0:00:10.423) 0:00:28.498 ********
2026-03-28 00:47:00.972699 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:47:00.972710 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:47:00.972721 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:47:00.972731 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:47:00.972742 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:47:00.972752 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:47:00.972763 | orchestrator | changed: [testbed-manager]
2026-03-28 00:47:00.972784 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-28 00:47:00.972793 | orchestrator | Saturday 28 March 2026 00:46:29 +0000 (0:00:43.896) 0:01:12.394 ********
2026-03-28 00:47:00.972804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:47:00.972825 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-28 00:47:00.972835 | orchestrator | Saturday 28 March 2026 00:46:30 +0000 (0:00:01.669) 0:01:14.064 ********
2026-03-28 00:47:00.972844 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-28 00:47:00.972854 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-28 00:47:00.972865 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-28 00:47:00.972875 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-28 00:47:00.972906 |
orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-28 00:47:00.972916 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-28 00:47:00.972926 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-28 00:47:00.972935 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-28 00:47:00.972945 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-03-28 00:47:00.972955 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-03-28 00:47:00.972964 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-03-28 00:47:00.972974 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-28 00:47:00.972983 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-28 00:47:00.972993 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-28 00:47:00.973002 | orchestrator | 2026-03-28 00:47:00.973012 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-28 00:47:00.973023 | orchestrator | Saturday 28 March 2026 00:46:35 +0000 (0:00:04.217) 0:01:18.282 ******** 2026-03-28 00:47:00.973032 | orchestrator | ok: [testbed-manager] 2026-03-28 00:47:00.973042 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:47:00.973052 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:47:00.973070 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:47:00.973080 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:47:00.973090 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:47:00.973099 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:47:00.973108 | orchestrator | 2026-03-28 00:47:00.973118 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-03-28 00:47:00.973128 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:01.302) 0:01:19.585 ******** 2026-03-28 00:47:00.973138 | orchestrator | changed: 
[testbed-node-0] 2026-03-28 00:47:00.973147 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:47:00.973157 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:47:00.973168 | orchestrator | changed: [testbed-manager] 2026-03-28 00:47:00.973184 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:47:00.973200 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:47:00.973216 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:47:00.973232 | orchestrator | 2026-03-28 00:47:00.973247 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-03-28 00:47:00.973263 | orchestrator | Saturday 28 March 2026 00:46:37 +0000 (0:00:01.588) 0:01:21.173 ******** 2026-03-28 00:47:00.973280 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:47:00.973294 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:47:00.973308 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:47:00.973323 | orchestrator | ok: [testbed-manager] 2026-03-28 00:47:00.973338 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:47:00.973389 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:47:00.973405 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:47:00.973422 | orchestrator | 2026-03-28 00:47:00.973438 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-28 00:47:00.973455 | orchestrator | Saturday 28 March 2026 00:46:39 +0000 (0:00:01.645) 0:01:22.819 ******** 2026-03-28 00:47:00.973471 | orchestrator | ok: [testbed-manager] 2026-03-28 00:47:00.973484 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:47:00.973498 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:47:00.973512 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:47:00.973529 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:47:00.973546 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:47:00.973562 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:47:00.973578 | orchestrator | 2026-03-28 00:47:00.973593 | 
orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-28 00:47:00.973610 | orchestrator | Saturday 28 March 2026 00:46:42 +0000 (0:00:02.839) 0:01:25.659 ********
2026-03-28 00:47:00.973627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-28 00:47:00.973657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:47:00.973676 | orchestrator |
2026-03-28 00:47:00.973693 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-28 00:47:00.973708 | orchestrator | Saturday 28 March 2026 00:46:44 +0000 (0:00:02.113) 0:01:27.772 ********
2026-03-28 00:47:00.973719 | orchestrator | changed: [testbed-manager]
2026-03-28 00:47:00.973729 | orchestrator |
2026-03-28 00:47:00.973738 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-28 00:47:00.973748 | orchestrator | Saturday 28 March 2026 00:46:48 +0000 (0:00:03.739) 0:01:31.512 ********
2026-03-28 00:47:00.973757 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:47:00.973772 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:47:00.973788 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:47:00.973804 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:47:00.973820 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:47:00.973834 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:47:00.973850 | orchestrator | changed: [testbed-manager]
2026-03-28 00:47:00.973865 | orchestrator |
2026-03-28 00:47:00.973880 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:47:00.973912 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:00.973930 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:00.973946 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:00.973961 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:00.973996 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:00.974011 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:00.974114 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:47:00.974132 | orchestrator |
2026-03-28 00:47:00.974148 | orchestrator |
2026-03-28 00:47:00.974163 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:47:00.974179 | orchestrator | Saturday 28 March 2026 00:47:00 +0000 (0:00:11.916) 0:01:43.428 ********
2026-03-28 00:47:00.974195 | orchestrator | ===============================================================================
2026-03-28 00:47:00.974257 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 43.90s
2026-03-28 00:47:00.974275 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.92s
2026-03-28 00:47:00.974292 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.42s
2026-03-28 00:47:00.974308 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.21s
2026-03-28 00:47:00.974324 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.22s
2026-03-28 00:47:00.974340 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.74s
2026-03-28 00:47:00.974419 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.50s
2026-03-28 00:47:00.974436 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.14s
2026-03-28 00:47:00.974453 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.94s
2026-03-28 00:47:00.974469 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.84s
2026-03-28 00:47:00.974485 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.11s
2026-03-28 00:47:00.974702 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.91s
2026-03-28 00:47:00.974721 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.67s
2026-03-28 00:47:00.974736 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.65s
2026-03-28 00:47:00.974751 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.59s
2026-03-28 00:47:00.974768 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.30s
2026-03-28 00:47:00.974787 | orchestrator | 2026-03-28 00:47:00 | INFO  | Task cbdebb9c-f003-4617-aa78-3271febeca3f is in state SUCCESS
2026-03-28 00:47:00.974814 | orchestrator | 2026-03-28 00:47:00 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state STARTED
2026-03-28 00:47:00.978284 | orchestrator | 2026-03-28 00:47:00 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:47:00.980196 | orchestrator | 2026-03-28 00:47:00 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:47:00.980244 | orchestrator | 2026-03-28 00:47:00 | INFO  | Wait 1 second(s) until the next check
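The wait loop visible above (query each task's state, print it, sleep, re-check until nothing is left in STARTED) can be sketched as follows. This is a minimal illustration, not the OSISM implementation; `get_task_state` is a hypothetical stand-in for however the manager actually queries task state:

```python
import time
from typing import Callable, Dict, List


def wait_for_tasks(
    task_ids: List[str],
    get_task_state: Callable[[str], str],
    interval: float = 1.0,
    max_checks: int = 1000,
) -> Dict[str, str]:
    """Poll task states until no task is STARTED; return the final states.

    get_task_state is a hypothetical callback standing in for the real
    task-state query; the print format mirrors the log lines above.
    """
    for _ in range(max_checks):
        # Query every task once per cycle.
        states = {tid: get_task_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        # Done once nothing is still running.
        if all(state != "STARTED" for state in states.values()):
            return states
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("tasks did not finish within the allotted checks")
```

Note that although the log says "Wait 1 second(s)", consecutive cycles are roughly three seconds apart, since each cycle also pays the cost of the state queries themselves.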
[polling of tasks c6e3cb40-0a5d-4a31-83b2-d29665f623dc, 54f07996-dc36-4b59-aa41-66de83b2ec79 and 14509cd2-e77d-4c13-885e-b01b4a4cee04 continued unchanged, one STARTED/wait cycle roughly every 3 seconds, until 2026-03-28 00:48:05]
2026-03-28 00:48:08.200863 | orchestrator |
2026-03-28 00:48:08.201067 | orchestrator |
2026-03-28 00:48:08.201084 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-28 00:48:08.201095 | orchestrator |
2026-03-28 00:48:08.201122 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-28 00:48:08.201133 | orchestrator | Saturday 28 March 2026 00:45:09 +0000 (0:00:00.334) 0:00:00.334 ********
2026-03-28 00:48:08.201143 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:48:08.201154 | orchestrator |
2026-03-28 00:48:08.201218 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-28 00:48:08.201229 | orchestrator | Saturday 28 March 2026 00:45:10 +0000 (0:00:01.552) 0:00:01.887 ********
2026-03-28 00:48:08.201239 | orchestrator | 
changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:48:08.201248 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:48:08.201285 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:48:08.201299 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:48:08.201308 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:48:08.201318 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:48:08.201327 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:48:08.201337 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:48:08.201346 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:48:08.201356 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:48:08.201366 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-28 00:48:08.201376 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:48:08.201386 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:48:08.201402 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:48:08.201414 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:48:08.201430 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:48:08.201443 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:48:08.201454 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-28 00:48:08.201465 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:48:08.201476 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:48:08.201488 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-28 00:48:08.201499 | orchestrator | 2026-03-28 00:48:08.201511 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-28 00:48:08.201528 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:04.795) 0:00:06.682 ******** 2026-03-28 00:48:08.201548 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:48:08.201561 | orchestrator | 2026-03-28 00:48:08.201572 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-28 00:48:08.201583 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:01.684) 0:00:08.366 ******** 2026-03-28 00:48:08.201600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.201627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.201661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.201674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.201685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.201697 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.201710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201774 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.201798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-28 00:48:08.201881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201891 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-28 00:48:08.201927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201937 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.201947 | orchestrator | 2026-03-28 00:48:08.201957 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-28 00:48:08.201967 | orchestrator | Saturday 28 March 2026 00:45:24 +0000 (0:00:07.452) 0:00:15.819 ******** 2026-03-28 00:48:08.201985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.201997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202108 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202126 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202165 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:48:08.202176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202186 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202197 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:48:08.202214 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202328 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:48:08.202346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202377 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:48:08.202387 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202434 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:48:08.202453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202464 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:48:08.202474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202484 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:48:08.202493 | orchestrator | 2026-03-28 00:48:08.202503 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-28 00:48:08.202513 | orchestrator | Saturday 28 March 2026 00:45:30 +0000 (0:00:05.646) 0:00:21.465 ******** 2026-03-28 00:48:08.202523 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202533 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202568 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202578 | orchestrator | skipping: [testbed-manager] 2026-03-28 
00:48:08.202588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202681 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:48:08.202694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202714 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:48:08.202743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08 | INFO  | Task c6e3cb40-0a5d-4a31-83b2-d29665f623dc is in state SUCCESS 2026-03-28 00:48:08.202764 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 00:48:08.202774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202784 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:48:08.202794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.202825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202859 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:48:08.202869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.202879 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 00:48:08.202888 | orchestrator | 2026-03-28 00:48:08.202898 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-28 00:48:08.202908 | orchestrator | Saturday 28 March 2026 00:45:36 +0000 (0:00:06.368) 0:00:27.833 ******** 2026-03-28 00:48:08.202918 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:48:08.202927 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:48:08.202942 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:48:08.202953 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:48:08.202962 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:48:08.202971 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:48:08.202981 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:48:08.202990 | orchestrator | 2026-03-28 00:48:08.203000 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-28 00:48:08.203010 | orchestrator | Saturday 28 March 2026 00:45:39 +0000 (0:00:02.560) 0:00:30.394 ******** 2026-03-28 00:48:08.203020 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:48:08.203036 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:48:08.203045 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:48:08.203055 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:48:08.203064 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:48:08.203073 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:48:08.203083 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:48:08.203092 | orchestrator | 2026-03-28 00:48:08.203102 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-28 00:48:08.203111 | orchestrator | Saturday 28 March 2026 00:45:40 +0000 (0:00:01.181) 0:00:31.576 ******** 2026-03-28 00:48:08.203121 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:48:08.203130 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 00:48:08.203140 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:48:08.203149 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:48:08.203159 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:48:08.203168 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:48:08.203178 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:48:08.203188 | orchestrator | 2026-03-28 00:48:08.203197 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-28 00:48:08.203207 | orchestrator | Saturday 28 March 2026 00:45:43 +0000 (0:00:03.368) 0:00:34.945 ******** 2026-03-28 00:48:08.203216 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:08.203226 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:08.203235 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:08.203245 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:08.203274 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:08.203285 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:08.203295 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:08.203304 | orchestrator | 2026-03-28 00:48:08.203314 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-28 00:48:08.203323 | orchestrator | Saturday 28 March 2026 00:45:47 +0000 (0:00:03.295) 0:00:38.240 ******** 2026-03-28 00:48:08.203333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.203344 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.203358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.203368 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.203389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.203400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.203410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.203419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203429 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203485 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203495 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203579 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.203589 | orchestrator | 2026-03-28 00:48:08.203598 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-28 00:48:08.203608 | orchestrator | Saturday 28 March 2026 00:45:53 +0000 (0:00:06.703) 0:00:44.944 ******** 2026-03-28 00:48:08.203618 | orchestrator | [WARNING]: Skipped 2026-03-28 00:48:08.203628 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-28 00:48:08.203638 | orchestrator | to this access issue: 2026-03-28 00:48:08.203647 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-28 00:48:08.203657 | orchestrator | directory 2026-03-28 00:48:08.203666 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:48:08.203676 | orchestrator | 2026-03-28 00:48:08.203685 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 
2026-03-28 00:48:08.203694 | orchestrator | Saturday 28 March 2026 00:45:55 +0000 (0:00:01.527) 0:00:46.471 ******** 2026-03-28 00:48:08.203704 | orchestrator | [WARNING]: Skipped 2026-03-28 00:48:08.203713 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-28 00:48:08.203723 | orchestrator | to this access issue: 2026-03-28 00:48:08.203733 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-28 00:48:08.203742 | orchestrator | directory 2026-03-28 00:48:08.203752 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:48:08.203761 | orchestrator | 2026-03-28 00:48:08.203771 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-28 00:48:08.203780 | orchestrator | Saturday 28 March 2026 00:45:56 +0000 (0:00:01.503) 0:00:47.975 ******** 2026-03-28 00:48:08.203790 | orchestrator | [WARNING]: Skipped 2026-03-28 00:48:08.203800 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-28 00:48:08.203809 | orchestrator | to this access issue: 2026-03-28 00:48:08.203819 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-28 00:48:08.203828 | orchestrator | directory 2026-03-28 00:48:08.203837 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:48:08.203847 | orchestrator | 2026-03-28 00:48:08.203857 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-28 00:48:08.203866 | orchestrator | Saturday 28 March 2026 00:45:58 +0000 (0:00:01.405) 0:00:49.381 ******** 2026-03-28 00:48:08.203875 | orchestrator | [WARNING]: Skipped 2026-03-28 00:48:08.203885 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-28 00:48:08.203894 | orchestrator | to this access issue: 2026-03-28 00:48:08.203904 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-28 00:48:08.203919 | orchestrator | directory 2026-03-28 00:48:08.203929 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 00:48:08.203938 | orchestrator | 2026-03-28 00:48:08.203948 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-28 00:48:08.203957 | orchestrator | Saturday 28 March 2026 00:45:59 +0000 (0:00:01.733) 0:00:51.115 ******** 2026-03-28 00:48:08.203967 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:08.203976 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:08.203986 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:08.203995 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:08.204004 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:08.204014 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:08.204023 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:08.204033 | orchestrator | 2026-03-28 00:48:08.204042 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-28 00:48:08.204052 | orchestrator | Saturday 28 March 2026 00:46:07 +0000 (0:00:07.142) 0:00:58.257 ******** 2026-03-28 00:48:08.204061 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:48:08.204075 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:48:08.204085 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:48:08.204095 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:48:08.204104 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 
00:48:08.204114 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:48:08.204123 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-28 00:48:08.204132 | orchestrator | 2026-03-28 00:48:08.204142 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-28 00:48:08.204151 | orchestrator | Saturday 28 March 2026 00:46:12 +0000 (0:00:05.129) 0:01:03.387 ******** 2026-03-28 00:48:08.204161 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:08.204170 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:08.204180 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:08.204189 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:08.204198 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:08.204208 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:08.204217 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:08.204226 | orchestrator | 2026-03-28 00:48:08.204236 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-28 00:48:08.204246 | orchestrator | Saturday 28 March 2026 00:46:15 +0000 (0:00:03.189) 0:01:06.576 ******** 2026-03-28 00:48:08.204283 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204304 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.204324 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.204345 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204360 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.204392 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204403 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204413 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.204439 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-03-28 00:48:08.204449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.204459 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.204489 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204500 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.204529 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204539 | orchestrator | 
ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204549 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204559 | orchestrator | 2026-03-28 00:48:08.204568 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-28 00:48:08.204578 | orchestrator | Saturday 28 March 2026 00:46:18 +0000 (0:00:02.992) 0:01:09.569 ******** 2026-03-28 00:48:08.204588 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:48:08.204601 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:48:08.204611 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:48:08.204620 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:48:08.204629 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:48:08.204639 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:48:08.204648 | orchestrator | changed: 
[testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-28 00:48:08.204658 | orchestrator | 2026-03-28 00:48:08.204667 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-28 00:48:08.204677 | orchestrator | Saturday 28 March 2026 00:46:21 +0000 (0:00:02.897) 0:01:12.466 ******** 2026-03-28 00:48:08.204687 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:48:08.204696 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:48:08.204706 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:48:08.204715 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:48:08.204724 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:48:08.204744 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:48:08.204755 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-28 00:48:08.204764 | orchestrator | 2026-03-28 00:48:08.204774 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-28 00:48:08.204783 | orchestrator | Saturday 28 March 2026 00:46:24 +0000 (0:00:02.936) 0:01:15.403 ******** 2026-03-28 00:48:08.204793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204838 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204902 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-28 00:48:08.204947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204968 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-28 00:48:08.204979 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.204990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.205000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.205010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.205021 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.205034 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:48:08.205044 | orchestrator | 2026-03-28 00:48:08.205054 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-28 00:48:08.205063 | orchestrator | Saturday 28 March 2026 00:46:29 +0000 (0:00:05.047) 0:01:20.451 ******** 2026-03-28 00:48:08.205078 | orchestrator | changed: [testbed-manager] => { 2026-03-28 00:48:08.205089 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:08.205098 | orchestrator | } 2026-03-28 00:48:08.205108 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 00:48:08.205118 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:08.205127 | orchestrator | } 2026-03-28 00:48:08.205136 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:48:08.205146 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:08.205155 | 
orchestrator | } 2026-03-28 00:48:08.205165 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:48:08.205174 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:08.205184 | orchestrator | } 2026-03-28 00:48:08.205193 | orchestrator | changed: [testbed-node-3] => { 2026-03-28 00:48:08.205202 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:08.205212 | orchestrator | } 2026-03-28 00:48:08.205221 | orchestrator | changed: [testbed-node-4] => { 2026-03-28 00:48:08.205230 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:08.205240 | orchestrator | } 2026-03-28 00:48:08.205249 | orchestrator | changed: [testbed-node-5] => { 2026-03-28 00:48:08.205277 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:08.205288 | orchestrator | } 2026-03-28 00:48:08.205297 | orchestrator | 2026-03-28 00:48:08.205307 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:48:08.205317 | orchestrator | Saturday 28 March 2026 00:46:30 +0000 (0:00:00.969) 0:01:21.420 ******** 2026-03-28 00:48:08.205333 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.205344 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205354 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205364 | orchestrator | skipping: [testbed-manager] 2026-03-28 00:48:08.205374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.205384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-28 00:48:08.205404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.205431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205451 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:48:08.205461 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:48:08.205470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.205480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205507 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.205520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205540 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:48:08.205550 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:48:08.205565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.205576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205596 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:48:08.205606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-28 00:48:08.205622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:48:08.205645 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:48:08.205655 | orchestrator | 2026-03-28 00:48:08.205665 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-28 00:48:08.205675 | orchestrator | Saturday 28 March 2026 00:46:32 +0000 (0:00:02.258) 0:01:23.679 ******** 2026-03-28 00:48:08.205685 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:08.205694 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:08.205703 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:08.205713 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:08.205722 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:08.205732 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:08.205741 | orchestrator | changed: 
[testbed-node-5] 2026-03-28 00:48:08.205751 | orchestrator | 2026-03-28 00:48:08.205760 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-28 00:48:08.205770 | orchestrator | Saturday 28 March 2026 00:46:34 +0000 (0:00:02.060) 0:01:25.739 ******** 2026-03-28 00:48:08.205779 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:08.205789 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:08.205798 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:08.205808 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:08.205817 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:08.205827 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:08.205836 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:08.205845 | orchestrator | 2026-03-28 00:48:08.205855 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:48:08.205865 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:01.565) 0:01:27.305 ******** 2026-03-28 00:48:08.205874 | orchestrator | 2026-03-28 00:48:08.205884 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:48:08.205893 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:00.088) 0:01:27.394 ******** 2026-03-28 00:48:08.205903 | orchestrator | 2026-03-28 00:48:08.205913 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:48:08.205928 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:00.067) 0:01:27.461 ******** 2026-03-28 00:48:08.205938 | orchestrator | 2026-03-28 00:48:08.205947 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:48:08.205956 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:00.064) 0:01:27.526 ******** 2026-03-28 00:48:08.205966 | orchestrator | 2026-03-28 
00:48:08.205975 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:48:08.205985 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:00.064) 0:01:27.591 ******** 2026-03-28 00:48:08.205994 | orchestrator | 2026-03-28 00:48:08.206004 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:48:08.206039 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:00.064) 0:01:27.656 ******** 2026-03-28 00:48:08.206060 | orchestrator | 2026-03-28 00:48:08.206070 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-28 00:48:08.206080 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:00.063) 0:01:27.719 ******** 2026-03-28 00:48:08.206089 | orchestrator | 2026-03-28 00:48:08.206099 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-28 00:48:08.206108 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:00.096) 0:01:27.816 ******** 2026-03-28 00:48:08.206118 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:08.206127 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:08.206137 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:08.206147 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:08.206156 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:08.206166 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:08.206175 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:08.206185 | orchestrator | 2026-03-28 00:48:08.206194 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-28 00:48:08.206204 | orchestrator | Saturday 28 March 2026 00:47:18 +0000 (0:00:41.471) 0:02:09.288 ******** 2026-03-28 00:48:08.206213 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:08.206223 | orchestrator | 2026-03-28
00:48:08 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:08.206233 | orchestrator | 2026-03-28 00:48:08 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:08.206243 | orchestrator | 2026-03-28 00:48:08 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:08.206252 | orchestrator | 2026-03-28 00:48:08 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:08.206315 | orchestrator | 2026-03-28 00:48:08 | INFO  | Task 260374d0-52fe-4e6f-91b3-da9618631fe5 is in state STARTED 2026-03-28 00:48:08.206325 | orchestrator | 2026-03-28 00:48:08 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:08.206335 | orchestrator | 2026-03-28 00:48:08 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:08.206391 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:08.206403 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:08.206413 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:08.206423 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:08.206432 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:08.206442 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:08.206451 | orchestrator | 2026-03-28 00:48:08.206461 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-28 00:48:08.206471 | orchestrator | Saturday 28 March 2026 00:47:58 +0000 (0:00:40.832) 0:02:50.121 ******** 2026-03-28 00:48:08.206481 | orchestrator | ok: [testbed-manager] 2026-03-28 00:48:08.206490 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:48:08.206500 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:48:08.206510 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:48:08.206519 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:48:08.206529 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:48:08.206538
| orchestrator | ok: [testbed-node-5] 2026-03-28 00:48:08.206546 | orchestrator | 2026-03-28 00:48:08.206559 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-28 00:48:08.206567 | orchestrator | Saturday 28 March 2026 00:48:00 +0000 (0:00:01.954) 0:02:52.075 ******** 2026-03-28 00:48:08.206575 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:08.206583 | orchestrator | changed: [testbed-manager] 2026-03-28 00:48:08.206591 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:08.206598 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:48:08.206606 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:48:08.206614 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:48:08.206622 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:08.206635 | orchestrator | 2026-03-28 00:48:08.206643 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:48:08.206653 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:48:08.206661 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:48:08.206669 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:48:08.206677 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:48:08.206685 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:48:08.206692 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:48:08.206700 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:48:08.206708 | orchestrator | 2026-03-28 00:48:08.206716 | 
orchestrator | 2026-03-28 00:48:08.206724 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:48:08.206731 | orchestrator | Saturday 28 March 2026 00:48:05 +0000 (0:00:04.732) 0:02:56.808 ******** 2026-03-28 00:48:08.206739 | orchestrator | =============================================================================== 2026-03-28 00:48:08.206747 | orchestrator | common : Restart fluentd container ------------------------------------- 41.47s 2026-03-28 00:48:08.206755 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 40.83s 2026-03-28 00:48:08.206762 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 7.45s 2026-03-28 00:48:08.206770 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 7.14s 2026-03-28 00:48:08.206778 | orchestrator | common : Copying over config.json files for services -------------------- 6.70s 2026-03-28 00:48:08.206786 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 6.37s 2026-03-28 00:48:08.206794 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 5.65s 2026-03-28 00:48:08.206801 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.13s 2026-03-28 00:48:08.206809 | orchestrator | service-check-containers : common | Check containers -------------------- 5.05s 2026-03-28 00:48:08.206817 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.80s 2026-03-28 00:48:08.206825 | orchestrator | common : Restart cron container ----------------------------------------- 4.73s 2026-03-28 00:48:08.206832 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 3.37s 2026-03-28 00:48:08.206840 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.30s 
2026-03-28 00:48:08.206848 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.19s 2026-03-28 00:48:08.206856 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.99s 2026-03-28 00:48:08.206863 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.94s 2026-03-28 00:48:08.206871 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.90s 2026-03-28 00:48:08.206879 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.56s 2026-03-28 00:48:08.206887 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.26s 2026-03-28 00:48:08.206894 | orchestrator | common : Creating log volume -------------------------------------------- 2.06s 2026-03-28 00:48:11.200636 | orchestrator | 2026-03-28 00:48:11 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:11.202186 | orchestrator | 2026-03-28 00:48:11 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:11.203901 | orchestrator | 2026-03-28 00:48:11 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:11.206783 | orchestrator | 2026-03-28 00:48:11 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:11.206839 | orchestrator | 2026-03-28 00:48:11 | INFO  | Task 260374d0-52fe-4e6f-91b3-da9618631fe5 is in state STARTED 2026-03-28 00:48:11.206851 | orchestrator | 2026-03-28 00:48:11 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:11.206862 | orchestrator | 2026-03-28 00:48:11 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:14.252686 | orchestrator | 2026-03-28 00:48:14 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:14.253535 | orchestrator | 2026-03-28 00:48:14 | 
INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:14.255066 | orchestrator | 2026-03-28 00:48:14 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:14.258593 | orchestrator | 2026-03-28 00:48:14 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:14.259828 | orchestrator | 2026-03-28 00:48:14 | INFO  | Task 260374d0-52fe-4e6f-91b3-da9618631fe5 is in state STARTED 2026-03-28 00:48:14.262188 | orchestrator | 2026-03-28 00:48:14 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:14.262235 | orchestrator | 2026-03-28 00:48:14 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:17.350167 | orchestrator | 2026-03-28 00:48:17 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:17.350238 | orchestrator | 2026-03-28 00:48:17 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:17.350289 | orchestrator | 2026-03-28 00:48:17 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:17.350294 | orchestrator | 2026-03-28 00:48:17 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:17.350299 | orchestrator | 2026-03-28 00:48:17 | INFO  | Task 260374d0-52fe-4e6f-91b3-da9618631fe5 is in state STARTED 2026-03-28 00:48:17.350320 | orchestrator | 2026-03-28 00:48:17 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:17.350325 | orchestrator | 2026-03-28 00:48:17 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:20.406796 | orchestrator | 2026-03-28 00:48:20 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:20.406868 | orchestrator | 2026-03-28 00:48:20 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:20.407697 | orchestrator | 2026-03-28 00:48:20 | INFO  | 
Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:20.409102 | orchestrator | 2026-03-28 00:48:20 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:20.410705 | orchestrator | 2026-03-28 00:48:20 | INFO  | Task 260374d0-52fe-4e6f-91b3-da9618631fe5 is in state STARTED 2026-03-28 00:48:20.412711 | orchestrator | 2026-03-28 00:48:20 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:20.412746 | orchestrator | 2026-03-28 00:48:20 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:23.510667 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:23.510773 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:23.514574 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:23.517163 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:23.518248 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task 260374d0-52fe-4e6f-91b3-da9618631fe5 is in state STARTED 2026-03-28 00:48:23.520347 | orchestrator | 2026-03-28 00:48:23 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:23.520389 | orchestrator | 2026-03-28 00:48:23 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:26.582866 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:26.584754 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:26.585462 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:26.589767 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 
54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:26.591060 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 260374d0-52fe-4e6f-91b3-da9618631fe5 is in state STARTED 2026-03-28 00:48:26.592672 | orchestrator | 2026-03-28 00:48:26 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:26.592712 | orchestrator | 2026-03-28 00:48:26 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:29.642641 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:29.643565 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:29.645816 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:29.651676 | orchestrator | 2026-03-28 00:48:29.651754 | orchestrator | 2026-03-28 00:48:29.651766 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:48:29.651774 | orchestrator | 2026-03-28 00:48:29.651778 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:48:29.651783 | orchestrator | Saturday 28 March 2026 00:48:09 +0000 (0:00:00.375) 0:00:00.375 ******** 2026-03-28 00:48:29.651788 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:48:29.651793 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:48:29.651798 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:48:29.651802 | orchestrator | 2026-03-28 00:48:29.651807 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:48:29.651812 | orchestrator | Saturday 28 March 2026 00:48:10 +0000 (0:00:00.772) 0:00:01.148 ******** 2026-03-28 00:48:29.651817 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-28 00:48:29.651822 | orchestrator | ok: [testbed-node-1] => 
(item=enable_memcached_True) 2026-03-28 00:48:29.651826 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-28 00:48:29.651830 | orchestrator | 2026-03-28 00:48:29.651835 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-28 00:48:29.651839 | orchestrator | 2026-03-28 00:48:29.651843 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-28 00:48:29.651847 | orchestrator | Saturday 28 March 2026 00:48:11 +0000 (0:00:00.724) 0:00:01.873 ******** 2026-03-28 00:48:29.651852 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:48:29.651873 | orchestrator | 2026-03-28 00:48:29.651878 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-28 00:48:29.651882 | orchestrator | Saturday 28 March 2026 00:48:12 +0000 (0:00:00.958) 0:00:02.832 ******** 2026-03-28 00:48:29.651887 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-28 00:48:29.651892 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-28 00:48:29.651896 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-28 00:48:29.651901 | orchestrator | 2026-03-28 00:48:29.651905 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-28 00:48:29.651909 | orchestrator | Saturday 28 March 2026 00:48:14 +0000 (0:00:01.848) 0:00:04.681 ******** 2026-03-28 00:48:29.651914 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-28 00:48:29.651918 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-28 00:48:29.651922 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-28 00:48:29.651927 | orchestrator | 2026-03-28 00:48:29.651931 | orchestrator | TASK [service-check-containers : memcached | Check containers] 
***************** 2026-03-28 00:48:29.651935 | orchestrator | Saturday 28 March 2026 00:48:16 +0000 (0:00:02.483) 0:00:07.164 ******** 2026-03-28 00:48:29.651943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:48:29.651950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:48:29.651979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:48:29.651984 | orchestrator | 2026-03-28 00:48:29.651989 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-03-28 00:48:29.651993 | orchestrator | Saturday 28 March 2026 00:48:19 +0000 (0:00:02.474) 0:00:09.639 ******** 2026-03-28 00:48:29.651998 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 00:48:29.652002 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:29.652007 | orchestrator | } 2026-03-28 00:48:29.652011 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:48:29.652020 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:29.652025 | orchestrator | } 2026-03-28 00:48:29.652029 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:48:29.652033 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:29.652037 | orchestrator | } 2026-03-28 00:48:29.652042 | orchestrator | 2026-03-28 00:48:29.652046 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:48:29.652050 | orchestrator | Saturday 28 March 2026 00:48:20 +0000 (0:00:01.520) 0:00:11.159 ******** 2026-03-28 00:48:29.652055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:48:29.652060 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:48:29.652064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:48:29.652069 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:48:29.652073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:48:29.652078 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:48:29.652082 | orchestrator | 2026-03-28 00:48:29.652086 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-28 00:48:29.652091 | orchestrator | Saturday 28 March 2026 00:48:24 +0000 (0:00:04.130) 0:00:15.289 ******** 2026-03-28 00:48:29.652095 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:29.652099 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:29.652104 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:29.652108 | orchestrator | 2026-03-28 00:48:29.652112 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:48:29.652118 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:48:29.652127 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:48:29.652135 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:48:29.652139 | orchestrator | 2026-03-28 00:48:29.652144 | orchestrator | 2026-03-28 00:48:29.652148 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:48:29.652152 | orchestrator | Saturday 28 March 2026 00:48:28 +0000 (0:00:03.480) 0:00:18.770 ******** 2026-03-28 00:48:29.652162 | orchestrator | =============================================================================== 2026-03-28 00:48:29.652167 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.13s 2026-03-28 00:48:29.652172 | orchestrator | memcached : Restart memcached container --------------------------------- 3.48s 
2026-03-28 00:48:29.652176 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.48s 2026-03-28 00:48:29.652180 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.47s 2026-03-28 00:48:29.652184 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.85s 2026-03-28 00:48:29.652189 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.52s 2026-03-28 00:48:29.652193 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.96s 2026-03-28 00:48:29.652197 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s 2026-03-28 00:48:29.652201 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2026-03-28 00:48:29.652206 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:29.652210 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task 260374d0-52fe-4e6f-91b3-da9618631fe5 is in state SUCCESS 2026-03-28 00:48:29.652215 | orchestrator | 2026-03-28 00:48:29 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:29.652219 | orchestrator | 2026-03-28 00:48:29 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:32.704865 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:32.710273 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:32.712736 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:32.716164 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:48:32.719750 | orchestrator | 2026-03-28 
00:48:32 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:32.724338 | orchestrator | 2026-03-28 00:48:32 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:32.724601 | orchestrator | 2026-03-28 00:48:32 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:35.779765 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:35.780106 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:35.781482 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:35.782284 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:48:35.783409 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:35.787913 | orchestrator | 2026-03-28 00:48:35 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:35.787971 | orchestrator | 2026-03-28 00:48:35 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:38.839836 | orchestrator | 2026-03-28 00:48:38 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:38.841851 | orchestrator | 2026-03-28 00:48:38 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:38.845102 | orchestrator | 2026-03-28 00:48:38 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:38.846108 | orchestrator | 2026-03-28 00:48:38 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:48:38.848179 | orchestrator | 2026-03-28 00:48:38 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:38.852035 | orchestrator | 2026-03-28 
00:48:38 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:38.852096 | orchestrator | 2026-03-28 00:48:38 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:41.922267 | orchestrator | 2026-03-28 00:48:41 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:41.922608 | orchestrator | 2026-03-28 00:48:41 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:41.924466 | orchestrator | 2026-03-28 00:48:41 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:41.924502 | orchestrator | 2026-03-28 00:48:41 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:48:41.925642 | orchestrator | 2026-03-28 00:48:41 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:41.926866 | orchestrator | 2026-03-28 00:48:41 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:41.926942 | orchestrator | 2026-03-28 00:48:41 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:45.016592 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:45.016839 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:45.019896 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:45.020282 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:48:45.021324 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:45.022173 | orchestrator | 2026-03-28 00:48:45 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:45.022248 | orchestrator | 2026-03-28 
00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:48.128923 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:48.129022 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:48.131955 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:48.132012 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:48:48.134462 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:48.134943 | orchestrator | 2026-03-28 00:48:48 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:48.135194 | orchestrator | 2026-03-28 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:51.257719 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:51.258796 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:51.261193 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:51.263491 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:48:51.264346 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:51.265471 | orchestrator | 2026-03-28 00:48:51 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:51.265524 | orchestrator | 2026-03-28 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:54.325710 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:54.328792 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:54.329949 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state STARTED 2026-03-28 00:48:54.330810 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:48:54.331741 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:48:54.333531 | orchestrator | 2026-03-28 00:48:54 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:54.333606 | orchestrator | 2026-03-28 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:48:57.387958 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:48:57.390503 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state STARTED 2026-03-28 00:48:57.394065 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task 773923c7-45d1-4f6e-b1bb-70979db31a9a is in state SUCCESS 2026-03-28 00:48:57.396700 | orchestrator | 2026-03-28 00:48:57.396750 | orchestrator | 2026-03-28 00:48:57.396759 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:48:57.396776 | orchestrator | 2026-03-28 00:48:57.396782 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:48:57.396787 | orchestrator | Saturday 28 March 2026 00:48:10 +0000 (0:00:00.510) 0:00:00.510 ******** 2026-03-28 00:48:57.396792 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:48:57.396797 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:48:57.396802 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:48:57.396806 | orchestrator | 
2026-03-28 00:48:57.396811 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:48:57.396815 | orchestrator | Saturday 28 March 2026 00:48:10 +0000 (0:00:00.752) 0:00:01.263 ******** 2026-03-28 00:48:57.396820 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-28 00:48:57.396824 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-28 00:48:57.396828 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-28 00:48:57.396832 | orchestrator | 2026-03-28 00:48:57.396836 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-28 00:48:57.396839 | orchestrator | 2026-03-28 00:48:57.396843 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-28 00:48:57.396847 | orchestrator | Saturday 28 March 2026 00:48:11 +0000 (0:00:00.386) 0:00:01.649 ******** 2026-03-28 00:48:57.396851 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:48:57.396876 | orchestrator | 2026-03-28 00:48:57.396880 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-28 00:48:57.396884 | orchestrator | Saturday 28 March 2026 00:48:12 +0000 (0:00:01.212) 0:00:02.862 ******** 2026-03-28 00:48:57.396890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 
00:48:57.396900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.396904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.396909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.396934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.396938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.396946 | orchestrator | 2026-03-28 00:48:57.396950 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-28 00:48:57.396954 | orchestrator | Saturday 28 March 2026 00:48:15 +0000 (0:00:02.720) 0:00:05.583 ******** 2026-03-28 00:48:57.396958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.396962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.396966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.396970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.396977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.396989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397000 | orchestrator | 2026-03-28 00:48:57.397006 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-28 00:48:57.397013 | orchestrator | Saturday 28 March 2026 00:48:18 +0000 (0:00:03.727) 0:00:09.310 ******** 2026-03-28 00:48:57.397017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397053 | orchestrator | 2026-03-28 00:48:57.397057 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-03-28 00:48:57.397061 | orchestrator 
| Saturday 28 March 2026 00:48:25 +0000 (0:00:06.409) 0:00:15.719 ******** 2026-03-28 00:48:57.397065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 
'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-28 00:48:57.397091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-03-28 00:48:57.397100 | orchestrator | 2026-03-28 00:48:57.397104 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-03-28 00:48:57.397108 | orchestrator | Saturday 28 March 2026 00:48:28 +0000 (0:00:02.951) 0:00:18.671 ******** 2026-03-28 00:48:57.397112 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 00:48:57.397117 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:57.397121 | orchestrator | } 2026-03-28 00:48:57.397125 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:48:57.397129 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:57.397133 | orchestrator | } 2026-03-28 00:48:57.397138 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:48:57.397144 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:48:57.397150 | orchestrator | } 2026-03-28 00:48:57.397156 | orchestrator | 2026-03-28 00:48:57.397162 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:48:57.397167 | orchestrator | Saturday 28 March 2026 00:48:30 +0000 (0:00:01.882) 0:00:20.553 ******** 2026-03-28 00:48:57.397173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-28 00:48:57.397179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-28 00:48:57.397185 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:48:57.397231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-28 00:48:57.397241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-28 00:48:57.397247 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:48:57.397260 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-28 00:48:57.397271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-28 00:48:57.397277 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:48:57.397281 | orchestrator | 2026-03-28 00:48:57.397287 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-28 00:48:57.397293 | orchestrator | Saturday 28 March 2026 00:48:31 +0000 (0:00:01.513) 0:00:22.067 ******** 2026-03-28 00:48:57.397299 | orchestrator | 2026-03-28 00:48:57.397305 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-28 00:48:57.397310 | orchestrator | Saturday 28 March 2026 00:48:31 +0000 (0:00:00.124) 0:00:22.192 ******** 2026-03-28 00:48:57.397316 | orchestrator | 2026-03-28 00:48:57.397322 | orchestrator | TASK [redis 
: Flush handlers] ************************************************** 2026-03-28 00:48:57.397328 | orchestrator | Saturday 28 March 2026 00:48:31 +0000 (0:00:00.087) 0:00:22.279 ******** 2026-03-28 00:48:57.397334 | orchestrator | 2026-03-28 00:48:57.397340 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-28 00:48:57.397345 | orchestrator | Saturday 28 March 2026 00:48:31 +0000 (0:00:00.099) 0:00:22.379 ******** 2026-03-28 00:48:57.397351 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:57.397357 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:57.397364 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:57.397370 | orchestrator | 2026-03-28 00:48:57.397376 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-28 00:48:57.397382 | orchestrator | Saturday 28 March 2026 00:48:42 +0000 (0:00:10.766) 0:00:33.146 ******** 2026-03-28 00:48:57.397408 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:48:57.397416 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:48:57.397422 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:48:57.397428 | orchestrator | 2026-03-28 00:48:57.397434 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:48:57.397442 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:48:57.397450 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:48:57.397455 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:48:57.397462 | orchestrator | 2026-03-28 00:48:57.397468 | orchestrator | 2026-03-28 00:48:57.397474 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:48:57.397480 | 
orchestrator | Saturday 28 March 2026 00:48:53 +0000 (0:00:10.844) 0:00:43.992 ******** 2026-03-28 00:48:57.397487 | orchestrator | =============================================================================== 2026-03-28 00:48:57.397499 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.85s 2026-03-28 00:48:57.397511 | orchestrator | redis : Restart redis container ---------------------------------------- 10.77s 2026-03-28 00:48:57.397517 | orchestrator | redis : Copying over redis config files --------------------------------- 6.41s 2026-03-28 00:48:57.397523 | orchestrator | redis : Copying over default config.json files -------------------------- 3.73s 2026-03-28 00:48:57.397529 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.95s 2026-03-28 00:48:57.397535 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.72s 2026-03-28 00:48:57.397541 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.88s 2026-03-28 00:48:57.397547 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.51s 2026-03-28 00:48:57.397553 | orchestrator | redis : include_tasks --------------------------------------------------- 1.21s 2026-03-28 00:48:57.397559 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.75s 2026-03-28 00:48:57.397565 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2026-03-28 00:48:57.397571 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.31s 2026-03-28 00:48:57.399106 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:48:57.401536 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 
00:48:57.404130 | orchestrator | 2026-03-28 00:48:57 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:48:57.404445 | orchestrator | 2026-03-28 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:43.482004 | orchestrator | 2026-03-28 00:49:43.482166 | orchestrator | 2026-03-28 00:49:43.482180 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:49:43.482189 | orchestrator | 2026-03-28 00:49:43.482196 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:49:43.482203 | orchestrator | Saturday 28 March 2026 00:48:10 +0000 (0:00:00.688) 0:00:00.688 ******** 2026-03-28 00:49:43.482209 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:49:43.482220 | orchestrator | ok: [testbed-node-1] 
2026-03-28 00:49:43.482228 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:49:43.482238 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:49:43.482245 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:49:43.482251 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:49:43.482259 | orchestrator | 2026-03-28 00:49:43.482268 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:49:43.482278 | orchestrator | Saturday 28 March 2026 00:48:11 +0000 (0:00:00.932) 0:00:01.621 ******** 2026-03-28 00:49:43.482286 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 00:49:43.482294 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 00:49:43.482302 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 00:49:43.482312 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 00:49:43.482320 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 00:49:43.482357 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-28 00:49:43.482365 | orchestrator | 2026-03-28 00:49:43.482372 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-28 00:49:43.482380 | orchestrator | 2026-03-28 00:49:43.482389 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-28 00:49:43.482397 | orchestrator | Saturday 28 March 2026 00:48:13 +0000 (0:00:01.633) 0:00:03.254 ******** 2026-03-28 00:49:43.482406 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:49:43.482447 | orchestrator | 2026-03-28 00:49:43.482454 | 
orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-28 00:49:43.482460 | orchestrator | Saturday 28 March 2026 00:48:15 +0000 (0:00:02.535) 0:00:05.790 ******** 2026-03-28 00:49:43.482467 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-28 00:49:43.482473 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-28 00:49:43.482479 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-28 00:49:43.482484 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-28 00:49:43.482490 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-28 00:49:43.482495 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-28 00:49:43.482502 | orchestrator | 2026-03-28 00:49:43.482508 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-28 00:49:43.482513 | orchestrator | Saturday 28 March 2026 00:48:19 +0000 (0:00:03.440) 0:00:09.230 ******** 2026-03-28 00:49:43.482521 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-28 00:49:43.482530 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-28 00:49:43.482538 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-28 00:49:43.482558 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-28 00:49:43.482564 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-28 00:49:43.482570 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-28 00:49:43.482576 | orchestrator | 2026-03-28 00:49:43.482581 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-28 00:49:43.482587 | orchestrator | Saturday 28 March 2026 00:48:24 +0000 (0:00:04.772) 0:00:14.003 ******** 2026-03-28 00:49:43.482593 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-28 
00:49:43.482598 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-28 00:49:43.482604 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:49:43.482611 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-28 00:49:43.482617 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:49:43.482623 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-28 00:49:43.482629 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:49:43.482634 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-28 00:49:43.482639 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:49:43.482645 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:49:43.482650 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-28 00:49:43.482656 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:49:43.482661 | orchestrator | 2026-03-28 00:49:43.482667 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-28 00:49:43.482673 | orchestrator | Saturday 28 March 2026 00:48:26 +0000 (0:00:02.648) 0:00:16.652 ******** 2026-03-28 00:49:43.482679 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:49:43.482685 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:49:43.482690 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:49:43.482696 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:49:43.482702 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:49:43.482708 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:49:43.482714 | orchestrator | 2026-03-28 00:49:43.482720 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-28 00:49:43.482726 | orchestrator | Saturday 28 March 2026 00:48:27 +0000 (0:00:01.189) 0:00:17.841 ******** 2026-03-28 00:49:43.482756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482828 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2026-03-28 00:49:43.482903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482917 | orchestrator | 2026-03-28 00:49:43.482924 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-28 00:49:43.482933 | orchestrator | Saturday 28 March 2026 00:48:31 +0000 (0:00:03.300) 0:00:21.142 ******** 2026-03-28 00:49:43.482943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.482972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 
2026-03-28 00:49:43.482982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483023 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483042 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483052 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483094 | orchestrator | 2026-03-28 00:49:43.483101 | orchestrator | TASK 
[openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-28 00:49:43.483107 | orchestrator | Saturday 28 March 2026 00:48:36 +0000 (0:00:05.571) 0:00:26.713 ******** 2026-03-28 00:49:43.483116 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:49:43.483263 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:49:43.483307 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:49:43.483314 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:49:43.483318 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:49:43.483322 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:49:43.483326 | orchestrator | 2026-03-28 00:49:43.483330 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-03-28 00:49:43.483334 | orchestrator | Saturday 28 March 2026 00:48:39 +0000 (0:00:02.167) 0:00:28.881 ******** 2026-03-28 00:49:43.483339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483375 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-28 00:49:43.483424 | orchestrator | 2026-03-28 00:49:43.483428 | orchestrator | TASK [service-check-containers : openvswitch | Notify 
handlers to restart containers] *** 2026-03-28 00:49:43.483433 | orchestrator | Saturday 28 March 2026 00:48:43 +0000 (0:00:04.752) 0:00:33.633 ******** 2026-03-28 00:49:43.483436 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 00:49:43.483440 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:49:43.483444 | orchestrator | } 2026-03-28 00:49:43.483448 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:49:43.483452 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:49:43.483456 | orchestrator | } 2026-03-28 00:49:43.483460 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:49:43.483464 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:49:43.483467 | orchestrator | } 2026-03-28 00:49:43.483471 | orchestrator | changed: [testbed-node-3] => { 2026-03-28 00:49:43.483475 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:49:43.483478 | orchestrator | } 2026-03-28 00:49:43.483482 | orchestrator | changed: [testbed-node-4] => { 2026-03-28 00:49:43.483486 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:49:43.483489 | orchestrator | } 2026-03-28 00:49:43.483493 | orchestrator | changed: [testbed-node-5] => { 2026-03-28 00:49:43.483497 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:49:43.483500 | orchestrator | } 2026-03-28 00:49:43.483504 | orchestrator | 2026-03-28 00:49:43.483508 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:49:43.483512 | orchestrator | Saturday 28 March 2026 00:48:45 +0000 (0:00:01.768) 0:00:35.402 ******** 2026-03-28 00:49:43.483521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-28 00:49:43.483526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-28 00:49:43.483530 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:49:43.483538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-28 00:49:43.483542 | orchestrator | 2026-03-28 00:49:43 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:49:43.483546 | orchestrator | 2026-03-28 00:49:43 | INFO  | Task a2d96b5d-ce83-46ff-9d77-fd6a47fbe4a7 is in state SUCCESS 2026-03-28 00:49:43.483550 | orchestrator | 2026-03-28 00:49:43 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:49:43.483553 | orchestrator | 2026-03-28 00:49:43 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:49:43.483557 | orchestrator | 2026-03-28 00:49:43 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:49:43.483561 | orchestrator | 2026-03-28 00:49:43 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:49:43.483565 | orchestrator | 2026-03-28 00:49:43 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:43.483570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-28 00:49:43.483574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-28 00:49:43.483582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-28 00:49:43.483586 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:49:43.483590 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:49:43.484003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-28 00:49:43.484023 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-28 00:49:43.484028 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:49:43.484033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-28 00:49:43.484037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-28 00:49:43.484049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-28 00:49:43.484053 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:49:43.484057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-28 00:49:43.484061 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:49:43.484065 | orchestrator | 2026-03-28 00:49:43.484069 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-03-28 00:49:43.484073 | orchestrator | Saturday 28 March 2026 00:48:48 +0000 (0:00:03.223) 0:00:38.625 ******** 2026-03-28 00:49:43.484077 | orchestrator | 2026-03-28 00:49:43.484081 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:49:43.484084 | orchestrator | Saturday 28 March 2026 00:48:49 +0000 (0:00:00.531) 0:00:39.157 ******** 2026-03-28 00:49:43.484088 | orchestrator | 2026-03-28 00:49:43.484092 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:49:43.484095 | orchestrator | Saturday 28 March 2026 00:48:49 +0000 (0:00:00.170) 0:00:39.327 ******** 2026-03-28 00:49:43.484099 | orchestrator | 2026-03-28 00:49:43.484103 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:49:43.484107 | orchestrator | Saturday 28 March 2026 00:48:49 +0000 (0:00:00.158) 0:00:39.486 ******** 2026-03-28 00:49:43.484110 | orchestrator | 2026-03-28 00:49:43.484118 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:49:43.484121 | orchestrator | Saturday 28 March 2026 00:48:49 +0000 (0:00:00.137) 0:00:39.623 ******** 2026-03-28 00:49:43.484125 | orchestrator | 2026-03-28 00:49:43.484149 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-28 00:49:43.484155 | orchestrator | Saturday 28 March 2026 00:48:50 +0000 (0:00:00.256) 0:00:39.880 ******** 2026-03-28 00:49:43.484160 | orchestrator | 2026-03-28 00:49:43.484166 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-28 00:49:43.484171 | orchestrator | Saturday 28 March 2026 00:48:50 +0000 (0:00:00.251) 0:00:40.132 ******** 2026-03-28 00:49:43.484175 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:49:43.484179 | orchestrator | 
changed: [testbed-node-2] 2026-03-28 00:49:43.484189 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:49:43.484193 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:49:43.484197 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:49:43.484206 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:49:43.484209 | orchestrator | 2026-03-28 00:49:43.484315 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-28 00:49:43.484320 | orchestrator | Saturday 28 March 2026 00:49:02 +0000 (0:00:12.415) 0:00:52.547 ******** 2026-03-28 00:49:43.484324 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:49:43.484328 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:49:43.484332 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:49:43.484335 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:49:43.484339 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:49:43.484342 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:49:43.484346 | orchestrator | 2026-03-28 00:49:43.484350 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-28 00:49:43.484354 | orchestrator | Saturday 28 March 2026 00:49:04 +0000 (0:00:01.756) 0:00:54.304 ******** 2026-03-28 00:49:43.484357 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:49:43.484361 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:49:43.484365 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:49:43.484368 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:49:43.484372 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:49:43.484376 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:49:43.484379 | orchestrator | 2026-03-28 00:49:43.484383 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-28 00:49:43.484387 | orchestrator | Saturday 28 March 2026 00:49:15 +0000 (0:00:11.525) 0:01:05.830 ******** 2026-03-28 
00:49:43.484390 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-28 00:49:43.484395 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-28 00:49:43.484398 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-28 00:49:43.484402 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-28 00:49:43.484409 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-28 00:49:43.484413 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-28 00:49:43.484417 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-28 00:49:43.484420 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-28 00:49:43.484424 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-28 00:49:43.484427 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-28 00:49:43.484431 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-28 00:49:43.484435 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-28 00:49:43.484438 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:49:43.484442 | 
orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:49:43.484446 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:49:43.484449 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:49:43.484453 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:49:43.484461 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-28 00:49:43.484465 | orchestrator | 2026-03-28 00:49:43.484468 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-28 00:49:43.484472 | orchestrator | Saturday 28 March 2026 00:49:24 +0000 (0:00:08.341) 0:01:14.172 ******** 2026-03-28 00:49:43.484476 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-28 00:49:43.484480 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:49:43.484483 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-28 00:49:43.484490 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:49:43.484494 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-28 00:49:43.484498 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:49:43.484502 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-28 00:49:43.484506 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-28 00:49:43.484510 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-28 00:49:43.484513 | orchestrator | 2026-03-28 00:49:43.484517 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-28 00:49:43.484521 | orchestrator | Saturday 28 March 
2026 00:49:27 +0000 (0:00:02.723) 0:01:16.895 ******** 2026-03-28 00:49:43.484525 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-28 00:49:43.484528 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:49:43.484532 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-28 00:49:43.484536 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:49:43.484539 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-28 00:49:43.484543 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:49:43.484547 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-28 00:49:43.484551 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-28 00:49:43.484554 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-28 00:49:43.484558 | orchestrator | 2026-03-28 00:49:43.484562 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-28 00:49:43.484565 | orchestrator | Saturday 28 March 2026 00:49:30 +0000 (0:00:03.563) 0:01:20.459 ******** 2026-03-28 00:49:43.484569 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:49:43.484573 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:49:43.484576 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:49:43.484580 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:49:43.484583 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:49:43.484587 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:49:43.484591 | orchestrator | 2026-03-28 00:49:43.484594 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:49:43.484599 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:49:43.484603 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  
rescued=0 ignored=0 2026-03-28 00:49:43.484607 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:49:43.484611 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:49:43.484618 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:49:43.484621 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 00:49:43.484629 | orchestrator | 2026-03-28 00:49:43.484633 | orchestrator | 2026-03-28 00:49:43.484636 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:49:43.484640 | orchestrator | Saturday 28 March 2026 00:49:39 +0000 (0:00:09.322) 0:01:29.782 ******** 2026-03-28 00:49:43.484644 | orchestrator | =============================================================================== 2026-03-28 00:49:43.484647 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.85s 2026-03-28 00:49:43.484651 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.42s 2026-03-28 00:49:43.484655 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.34s 2026-03-28 00:49:43.484658 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.57s 2026-03-28 00:49:43.484662 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 4.77s 2026-03-28 00:49:43.484666 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 4.75s 2026-03-28 00:49:43.484669 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.56s 2026-03-28 00:49:43.484673 | orchestrator | module-load : Load modules ---------------------------------------------- 
3.44s 2026-03-28 00:49:43.484677 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.30s 2026-03-28 00:49:43.484680 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.22s 2026-03-28 00:49:43.484684 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.72s 2026-03-28 00:49:43.484688 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.65s 2026-03-28 00:49:43.484691 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.54s 2026-03-28 00:49:43.484695 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.17s 2026-03-28 00:49:43.484699 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.77s 2026-03-28 00:49:43.484702 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.76s 2026-03-28 00:49:43.484706 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.63s 2026-03-28 00:49:43.484710 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.51s 2026-03-28 00:49:43.484715 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.19s 2026-03-28 00:49:43.484719 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s 2026-03-28 00:49:46.593002 | orchestrator | 2026-03-28 00:49:46 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:49:46.593103 | orchestrator | 2026-03-28 00:49:46 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:49:46.594354 | orchestrator | 2026-03-28 00:49:46 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:49:46.601447 | orchestrator | 2026-03-28 00:49:46 | INFO  | Task 
54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:49:46.602723 | orchestrator | 2026-03-28 00:49:46 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:49:46.605621 | orchestrator | 2026-03-28 00:49:46 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:49.660416 | orchestrator | 2026-03-28 00:49:49 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:49:49.663429 | orchestrator | 2026-03-28 00:49:49 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:49:49.664915 | orchestrator | 2026-03-28 00:49:49 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:49:49.669112 | orchestrator | 2026-03-28 00:49:49 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:49:49.670664 | orchestrator | 2026-03-28 00:49:49 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:49:49.670870 | orchestrator | 2026-03-28 00:49:49 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:52.793716 | orchestrator | 2026-03-28 00:49:52 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:49:52.795311 | orchestrator | 2026-03-28 00:49:52 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:49:52.796429 | orchestrator | 2026-03-28 00:49:52 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:49:52.798795 | orchestrator | 2026-03-28 00:49:52 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:49:52.799892 | orchestrator | 2026-03-28 00:49:52 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:49:52.799933 | orchestrator | 2026-03-28 00:49:52 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:55.932228 | orchestrator | 2026-03-28 00:49:55 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:49:55.935453 | orchestrator | 2026-03-28 00:49:55 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:49:55.935650 | orchestrator | 2026-03-28 00:49:55 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:49:55.936727 | orchestrator | 2026-03-28 00:49:55 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:49:55.937724 | orchestrator | 2026-03-28 00:49:55 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:49:55.937750 | orchestrator | 2026-03-28 00:49:55 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:49:59.159594 | orchestrator | 2026-03-28 00:49:59 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:49:59.159738 | orchestrator | 2026-03-28 00:49:59 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:49:59.159756 | orchestrator | 2026-03-28 00:49:59 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:49:59.163683 | orchestrator | 2026-03-28 00:49:59 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:49:59.164430 | orchestrator | 2026-03-28 00:49:59 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:49:59.164493 | orchestrator | 2026-03-28 00:49:59 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:02.236642 | orchestrator | 2026-03-28 00:50:02 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:02.241144 | orchestrator | 2026-03-28 00:50:02 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:02.242340 | orchestrator | 2026-03-28 00:50:02 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:02.243339 | orchestrator | 2026-03-28 00:50:02 | INFO  | Task 
54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:50:02.244185 | orchestrator | 2026-03-28 00:50:02 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:02.244214 | orchestrator | 2026-03-28 00:50:02 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:05.304638 | orchestrator | 2026-03-28 00:50:05 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:05.305681 | orchestrator | 2026-03-28 00:50:05 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:05.347357 | orchestrator | 2026-03-28 00:50:05 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:05.350470 | orchestrator | 2026-03-28 00:50:05 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:50:05.355686 | orchestrator | 2026-03-28 00:50:05 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:05.355761 | orchestrator | 2026-03-28 00:50:05 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:08.471442 | orchestrator | 2026-03-28 00:50:08 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:08.474239 | orchestrator | 2026-03-28 00:50:08 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:08.476692 | orchestrator | 2026-03-28 00:50:08 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:08.485358 | orchestrator | 2026-03-28 00:50:08 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED 2026-03-28 00:50:08.488112 | orchestrator | 2026-03-28 00:50:08 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:08.489819 | orchestrator | 2026-03-28 00:50:08 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:11.622528 | orchestrator | 2026-03-28 00:50:11 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED
2026-03-28 00:50:11 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED
2026-03-28 00:50:11 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED
2026-03-28 00:50:11 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state STARTED
2026-03-28 00:50:11 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:50:11 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:14 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED
2026-03-28 00:50:14 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED
2026-03-28 00:50:14 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED
2026-03-28 00:50:14 | INFO  | Task 54f07996-dc36-4b59-aa41-66de83b2ec79 is in state SUCCESS

PLAY [Prepare all k3s nodes] ***************************************************

TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
Saturday 28 March 2026 00:45:09 +0000 (0:00:00.280) 0:00:00.280 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_prereq : Set same timezone on every Server] **************************
Saturday 28 March 2026 00:45:10 +0000 (0:00:00.826) 0:00:01.107 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Set SELinux to disabled state] ******************************
Saturday 28 March 2026 00:45:11 +0000 (0:00:01.179) 0:00:02.287 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
Saturday 28 March 2026 00:45:12 +0000 (0:00:00.854) 0:00:03.141 ********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
Saturday 28 March 2026 00:45:15 +0000 (0:00:02.887) 0:00:06.029 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
Saturday 28 March 2026 00:45:17 +0000 (0:00:02.090) 0:00:08.120 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
Saturday 28 March 2026 00:45:19 +0000 (0:00:01.885) 0:00:10.006 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Load br_netfilter] ******************************************
Saturday 28 March 2026 00:45:21 +0000 (0:00:02.027) 0:00:12.033 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
Saturday 28 March 2026 00:45:22 +0000 (0:00:01.065) 0:00:13.099 ********
skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-4]
skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-5]
skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
skipping: [testbed-node-2]

TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
Saturday 28 March 2026 00:45:23 +0000 (0:00:01.491) 0:00:14.591 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
Saturday 28 March 2026 00:45:25 +0000 (0:00:01.688) 0:00:16.279 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_download : Download k3s binary x64] **********************************
Saturday 28 March 2026 00:45:26 +0000 (0:00:01.415) 0:00:17.694 ********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [k3s_download : Download k3s binary arm64] ********************************
Saturday 28 March 2026 00:45:33 +0000 (0:00:06.376) 0:00:24.071 ********
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [k3s_download : Download k3s binary armhf] ********************************
Saturday 28 March 2026 00:45:34 +0000 (0:00:01.259) 0:00:25.331 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
Saturday 28 March 2026 00:45:37 +0000 (0:00:03.350) 0:00:28.682 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
Saturday 28 March 2026 00:45:39 +0000 (0:00:01.628) 0:00:30.311 ********
skipping: [testbed-node-3] => (item=rancher)
skipping: [testbed-node-3] => (item=rancher/k3s)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=rancher)
skipping: [testbed-node-4] => (item=rancher/k3s)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=rancher)
skipping: [testbed-node-5] => (item=rancher/k3s)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=rancher)
skipping: [testbed-node-0] => (item=rancher/k3s)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=rancher)
skipping: [testbed-node-1] => (item=rancher/k3s)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=rancher)
skipping: [testbed-node-2] => (item=rancher/k3s)
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
Saturday 28 March 2026 00:45:40 +0000 (0:00:01.504) 0:00:31.817 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
Saturday 28 March 2026 00:45:42 +0000 (0:00:01.955) 0:00:33.772 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Deploy k3s master nodes] *************************************************

TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
Saturday 28 March 2026 00:45:45 +0000 (0:00:02.780) 0:00:36.552 ********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [k3s_server : Stop k3s-init] **********************************************
Saturday 28 March 2026 00:45:47 +0000 (0:00:01.693) 0:00:38.246 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Stop k3s] ***************************************************
Saturday 28 March 2026 00:45:49 +0000 (0:00:01.762) 0:00:40.008 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Clean previous runs of k3s-init] ****************************
Saturday 28 March 2026 00:45:50 +0000 (0:00:01.420) 0:00:41.428 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
Saturday 28 March 2026 00:45:52 +0000 (0:00:02.051) 0:00:43.479 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
Saturday 28 March 2026 00:45:53 +0000 (0:00:00.483) 0:00:43.963 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create custom resolv.conf for k3s] **************************
Saturday 28 March 2026 00:45:53 +0000 (0:00:00.775) 0:00:44.738 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Deploy vip manifest] ****************************************
Saturday 28 March 2026 00:45:55 +0000 (0:00:01.807) 0:00:46.546 ********
included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
Saturday 28 March 2026 00:45:56 +0000 (0:00:01.023) 0:00:47.570 ********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Create manifests directory on first master] *****************
Saturday 28 March 2026 00:46:00 +0000 (0:00:03.547) 0:00:51.118 ********
skipping: [testbed-node-1]
changed: [testbed-node-0]
skipping: [testbed-node-2]

TASK [k3s_server : Download vip rbac manifest to first master] *****************
Saturday 28 March 2026 00:46:01 +0000 (0:00:01.338) 0:00:52.456 ********
skipping: [testbed-node-2]
changed: [testbed-node-0]
skipping: [testbed-node-1]

TASK [k3s_server : Copy vip manifest to first master] **************************
Saturday 28 March 2026 00:46:03 +0000 (0:00:02.311) 0:00:54.768 ********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Deploy metallb manifest] ************************************
Saturday 28 March 2026 00:46:06 +0000 (0:00:02.266) 0:00:57.034 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Deploy kube-vip manifest] ***********************************
Saturday 28 March 2026 00:46:06 +0000 (0:00:00.319) 0:00:57.354 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
Saturday 28 March 2026 00:46:06 +0000 (0:00:00.505) 0:00:57.860 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
Saturday 28 March 2026 00:46:10 +0000 (0:00:03.060) 0:01:00.920 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
Saturday 28 March 2026 00:46:13 +0000 (0:00:02.986) 0:01:03.907 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Saturday 28 March 2026 00:46:13 +0000 (0:00:00.788) 0:01:04.696 ********
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Saturday 28 March 2026 00:46:57 +0000 (0:00:43.554) 0:01:48.250 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Saturday 28 March 2026 00:46:57 +0000 (0:00:00.564) 0:01:48.814 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Saturday 28 March 2026 00:46:59 +0000 (0:00:01.108) 0:01:49.923 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Saturday 28 March 2026 00:47:00 +0000 (0:00:01.418) 0:01:51.342 ********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [k3s_server : Wait for node-token] ****************************************
Saturday 28 March 2026 00:47:24 +0000 (0:00:24.529) 0:02:15.871 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Saturday 28 March 2026 00:47:25 +0000 (0:00:00.776) 0:02:16.648 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Saturday 28 March 2026 00:47:27 +0000 (0:00:01.485) 0:02:18.134 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Saturday 28 March 2026 00:47:27 +0000 (0:00:00.681) 0:02:18.815 ********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Store Master node-token] ************************************
Saturday 28 March 2026 00:47:28 +0000 (0:00:00.799) 0:02:19.614 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Saturday 28 March 2026 00:47:29 +0000 (0:00:00.321) 0:02:19.936 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Saturday 28 March 2026 00:47:30 +0000 (0:00:01.053) 0:02:20.989 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Saturday 28 March 2026 00:47:30 +0000 (0:00:00.745) 0:02:21.735 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Saturday 28 March 2026 00:47:31 +0000 (0:00:00.996) 0:02:22.731 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Saturday 28 March 2026 00:47:32 +0000 (0:00:00.976) 0:02:23.708 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Saturday 28 March 2026 00:47:33 +0000 (0:00:00.722) 0:02:24.430 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Saturday 28 March 2026 00:47:33 +0000 (0:00:00.404) 0:02:24.834 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Saturday 28 March 2026 00:47:35 +0000 (0:00:01.130) 0:02:25.965 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Saturday 28 March 2026 00:47:35 +0000 (0:00:00.896) 0:02:26.861 ********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Saturday 28 March 2026 00:47:39 +0000 (0:00:03.382) 0:02:30.244 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Saturday 28 March 2026 00:47:39 +0000 (0:00:00.329) 0:02:30.573 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Saturday 28 March 2026 00:47:40 +0000 (0:00:00.695) 0:02:31.269 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Saturday 28 March 2026 00:47:40 +0000 (0:00:00.600) 0:02:31.870 ********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Saturday 28 March 2026 00:47:41 +0000 (0:00:00.484) 0:02:32.354 ********
skipping: [testbed-node-3]
| orchestrator | skipping: [testbed-node-4] 2026-03-28 00:50:14.703563 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:50:14.703571 | orchestrator | 2026-03-28 00:50:14.703579 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-28 00:50:14.703586 | orchestrator | Saturday 28 March 2026 00:47:41 +0000 (0:00:00.271) 0:02:32.625 ******** 2026-03-28 00:50:14.703603 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:50:14.703612 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:50:14.703620 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:50:14.703627 | orchestrator | 2026-03-28 00:50:14.703635 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-28 00:50:14.703643 | orchestrator | Saturday 28 March 2026 00:47:42 +0000 (0:00:00.425) 0:02:33.051 ******** 2026-03-28 00:50:14.703651 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:50:14.703659 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:50:14.703667 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:50:14.703675 | orchestrator | 2026-03-28 00:50:14.703683 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-28 00:50:14.703697 | orchestrator | Saturday 28 March 2026 00:47:42 +0000 (0:00:00.282) 0:02:33.334 ******** 2026-03-28 00:50:14.703705 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:50:14.703713 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:50:14.703721 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:50:14.703729 | orchestrator | 2026-03-28 00:50:14.703743 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-28 00:50:14.703752 | orchestrator | Saturday 28 March 2026 00:47:43 +0000 (0:00:00.581) 0:02:33.915 ******** 2026-03-28 00:50:14.703760 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:50:14.703768 | 
orchestrator | changed: [testbed-node-4] 2026-03-28 00:50:14.703775 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:50:14.703783 | orchestrator | 2026-03-28 00:50:14.703791 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-28 00:50:14.703799 | orchestrator | Saturday 28 March 2026 00:47:44 +0000 (0:00:01.098) 0:02:35.013 ******** 2026-03-28 00:50:14.703807 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:50:14.703815 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:50:14.703822 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:50:14.703830 | orchestrator | 2026-03-28 00:50:14.703838 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-28 00:50:14.703846 | orchestrator | Saturday 28 March 2026 00:47:45 +0000 (0:00:01.416) 0:02:36.430 ******** 2026-03-28 00:50:14.703854 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:50:14.703861 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:50:14.703869 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:50:14.703877 | orchestrator | 2026-03-28 00:50:14.703885 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-28 00:50:14.703893 | orchestrator | 2026-03-28 00:50:14.703900 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-28 00:50:14.703908 | orchestrator | Saturday 28 March 2026 00:47:57 +0000 (0:00:11.581) 0:02:48.011 ******** 2026-03-28 00:50:14.703916 | orchestrator | ok: [testbed-manager] 2026-03-28 00:50:14.703924 | orchestrator | 2026-03-28 00:50:14.703931 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-28 00:50:14.703939 | orchestrator | Saturday 28 March 2026 00:47:57 +0000 (0:00:00.807) 0:02:48.819 ******** 2026-03-28 00:50:14.703947 | orchestrator | changed: [testbed-manager] 2026-03-28 
00:50:14.703955 | orchestrator | 2026-03-28 00:50:14.703963 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-28 00:50:14.703971 | orchestrator | Saturday 28 March 2026 00:47:58 +0000 (0:00:00.440) 0:02:49.260 ******** 2026-03-28 00:50:14.703978 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-28 00:50:14.703986 | orchestrator | 2026-03-28 00:50:14.703994 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-28 00:50:14.704002 | orchestrator | Saturday 28 March 2026 00:47:58 +0000 (0:00:00.539) 0:02:49.800 ******** 2026-03-28 00:50:14.704014 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:14.704028 | orchestrator | 2026-03-28 00:50:14.704043 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-28 00:50:14.704056 | orchestrator | Saturday 28 March 2026 00:47:59 +0000 (0:00:00.945) 0:02:50.745 ******** 2026-03-28 00:50:14.704070 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:14.704083 | orchestrator | 2026-03-28 00:50:14.704113 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-28 00:50:14.704126 | orchestrator | Saturday 28 March 2026 00:48:00 +0000 (0:00:00.586) 0:02:51.331 ******** 2026-03-28 00:50:14.704138 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-28 00:50:14.704150 | orchestrator | 2026-03-28 00:50:14.704162 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-28 00:50:14.704174 | orchestrator | Saturday 28 March 2026 00:48:02 +0000 (0:00:01.755) 0:02:53.086 ******** 2026-03-28 00:50:14.704187 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-28 00:50:14.704208 | orchestrator | 2026-03-28 00:50:14.704220 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-03-28 00:50:14.704233 | orchestrator | Saturday 28 March 2026 00:48:03 +0000 (0:00:00.928) 0:02:54.015 ******** 2026-03-28 00:50:14.704245 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:14.704258 | orchestrator | 2026-03-28 00:50:14.704271 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-28 00:50:14.704283 | orchestrator | Saturday 28 March 2026 00:48:03 +0000 (0:00:00.440) 0:02:54.456 ******** 2026-03-28 00:50:14.704297 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:14.704311 | orchestrator | 2026-03-28 00:50:14.704323 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-28 00:50:14.704337 | orchestrator | 2026-03-28 00:50:14.704350 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-28 00:50:14.704363 | orchestrator | Saturday 28 March 2026 00:48:04 +0000 (0:00:00.458) 0:02:54.914 ******** 2026-03-28 00:50:14.704376 | orchestrator | ok: [testbed-manager] 2026-03-28 00:50:14.704390 | orchestrator | 2026-03-28 00:50:14.704403 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-28 00:50:14.704417 | orchestrator | Saturday 28 March 2026 00:48:04 +0000 (0:00:00.159) 0:02:55.074 ******** 2026-03-28 00:50:14.704430 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-28 00:50:14.704443 | orchestrator | 2026-03-28 00:50:14.704457 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-28 00:50:14.704473 | orchestrator | Saturday 28 March 2026 00:48:04 +0000 (0:00:00.231) 0:02:55.305 ******** 2026-03-28 00:50:14.704481 | orchestrator | ok: [testbed-manager] 2026-03-28 00:50:14.704489 | orchestrator | 2026-03-28 00:50:14.704497 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-03-28 00:50:14.704505 | orchestrator | Saturday 28 March 2026 00:48:05 +0000 (0:00:01.056) 0:02:56.362 ******** 2026-03-28 00:50:14.704513 | orchestrator | ok: [testbed-manager] 2026-03-28 00:50:14.704521 | orchestrator | 2026-03-28 00:50:14.704528 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-28 00:50:14.704536 | orchestrator | Saturday 28 March 2026 00:48:06 +0000 (0:00:01.368) 0:02:57.730 ******** 2026-03-28 00:50:14.704544 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:14.704552 | orchestrator | 2026-03-28 00:50:14.704560 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-28 00:50:14.704568 | orchestrator | Saturday 28 March 2026 00:48:07 +0000 (0:00:00.772) 0:02:58.503 ******** 2026-03-28 00:50:14.704576 | orchestrator | ok: [testbed-manager] 2026-03-28 00:50:14.704584 | orchestrator | 2026-03-28 00:50:14.704602 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-28 00:50:14.704610 | orchestrator | Saturday 28 March 2026 00:48:08 +0000 (0:00:00.407) 0:02:58.911 ******** 2026-03-28 00:50:14.704617 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:14.704625 | orchestrator | 2026-03-28 00:50:14.704633 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-28 00:50:14.704641 | orchestrator | Saturday 28 March 2026 00:48:17 +0000 (0:00:09.715) 0:03:08.626 ******** 2026-03-28 00:50:14.704649 | orchestrator | changed: [testbed-manager] 2026-03-28 00:50:14.704656 | orchestrator | 2026-03-28 00:50:14.704664 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-28 00:50:14.704672 | orchestrator | Saturday 28 March 2026 00:48:34 +0000 (0:00:16.817) 0:03:25.443 ******** 2026-03-28 00:50:14.704679 | orchestrator | ok: [testbed-manager] 2026-03-28 
00:50:14.704687 | orchestrator | 2026-03-28 00:50:14.704695 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-28 00:50:14.704703 | orchestrator | 2026-03-28 00:50:14.704711 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-28 00:50:14.704718 | orchestrator | Saturday 28 March 2026 00:48:35 +0000 (0:00:00.798) 0:03:26.242 ******** 2026-03-28 00:50:14.704734 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:50:14.704741 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:50:14.704749 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:50:14.704757 | orchestrator | 2026-03-28 00:50:14.704764 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-28 00:50:14.704772 | orchestrator | Saturday 28 March 2026 00:48:36 +0000 (0:00:00.754) 0:03:26.997 ******** 2026-03-28 00:50:14.704780 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:50:14.704788 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:50:14.704796 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:50:14.704803 | orchestrator | 2026-03-28 00:50:14.704811 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-28 00:50:14.704819 | orchestrator | Saturday 28 March 2026 00:48:36 +0000 (0:00:00.535) 0:03:27.533 ******** 2026-03-28 00:50:14.704826 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:50:14.704834 | orchestrator | 2026-03-28 00:50:14.704842 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-28 00:50:14.704850 | orchestrator | Saturday 28 March 2026 00:48:37 +0000 (0:00:00.619) 0:03:28.152 ******** 2026-03-28 00:50:14.704858 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-28 00:50:14.704865 | 
orchestrator | 2026-03-28 00:50:14.704873 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-28 00:50:14.704881 | orchestrator | Saturday 28 March 2026 00:48:38 +0000 (0:00:01.520) 0:03:29.673 ******** 2026-03-28 00:50:14.704889 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 00:50:14.704897 | orchestrator | 2026-03-28 00:50:14.704905 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-28 00:50:14.704912 | orchestrator | Saturday 28 March 2026 00:48:39 +0000 (0:00:01.078) 0:03:30.752 ******** 2026-03-28 00:50:14.704920 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:50:14.704928 | orchestrator | 2026-03-28 00:50:14.704936 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-28 00:50:14.704944 | orchestrator | Saturday 28 March 2026 00:48:40 +0000 (0:00:00.361) 0:03:31.113 ******** 2026-03-28 00:50:14.704951 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 00:50:14.704959 | orchestrator | 2026-03-28 00:50:14.704967 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-28 00:50:14.704975 | orchestrator | Saturday 28 March 2026 00:48:41 +0000 (0:00:01.279) 0:03:32.393 ******** 2026-03-28 00:50:14.704982 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:50:14.704990 | orchestrator | 2026-03-28 00:50:14.704998 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-28 00:50:14.705006 | orchestrator | Saturday 28 March 2026 00:48:41 +0000 (0:00:00.185) 0:03:32.579 ******** 2026-03-28 00:50:14.705014 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:50:14.705021 | orchestrator | 2026-03-28 00:50:14.705029 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-28 00:50:14.705037 | orchestrator | Saturday 28 
March 2026 00:48:41 +0000 (0:00:00.231) 0:03:32.810 ******** 2026-03-28 00:50:14.705045 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:50:14.705052 | orchestrator | 2026-03-28 00:50:14.705060 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-28 00:50:14.705068 | orchestrator | Saturday 28 March 2026 00:48:42 +0000 (0:00:00.123) 0:03:32.934 ******** 2026-03-28 00:50:14.705076 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:50:14.705084 | orchestrator | 2026-03-28 00:50:14.705134 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-28 00:50:14.705142 | orchestrator | Saturday 28 March 2026 00:48:42 +0000 (0:00:00.138) 0:03:33.072 ******** 2026-03-28 00:50:14.705150 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-28 00:50:14.705158 | orchestrator | 2026-03-28 00:50:14.705166 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-28 00:50:14.705174 | orchestrator | Saturday 28 March 2026 00:48:48 +0000 (0:00:06.508) 0:03:39.581 ******** 2026-03-28 00:50:14.705187 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-28 00:50:14.705195 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Saturday 28 March 2026 00:49:33 +0000 (0:00:44.449) 0:04:24.030 ********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Saturday 28 March 2026 00:49:34 +0000 (0:00:01.201) 0:04:25.231 ********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Saturday 28 March 2026 00:49:36 +0000 (0:00:01.878) 0:04:27.109 ********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Saturday 28 March 2026 00:49:37 +0000 (0:00:01.289) 0:04:28.398 ********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Saturday 28 March 2026 00:49:37 +0000 (0:00:00.139) 0:04:28.538 ********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Saturday 28 March 2026 00:49:40 +0000 (0:00:02.491) 0:04:31.030 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Saturday 28 March 2026 00:49:40 +0000 (0:00:00.551) 0:04:31.582 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Saturday 28 March 2026 00:49:41 +0000 (0:00:01.241) 0:04:32.823 ********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Saturday 28 March 2026 00:49:42 +0000 (0:00:00.161) 0:04:32.984 ********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Saturday 28 March 2026 00:49:42 +0000 (0:00:00.569) 0:04:33.554 ********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Saturday 28 March 2026 00:49:49 +0000 (0:00:06.438) 0:04:39.992 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Saturday 28 March 2026 00:49:50 +0000 (0:00:01.107) 0:04:41.100 ********
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Saturday 28 March 2026 00:50:10 +0000 (0:00:20.580) 0:05:01.681 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Saturday 28 March 2026 00:50:11 +0000 (0:00:00.649) 0:05:02.331 ********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-3]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-0 : ok=50 changed=23 unreachable=0 failed=0 skipped=28 rescued=0 ignored=0
testbed-node-1 : ok=38 changed=16 unreachable=0 failed=0 skipped=25 rescued=0 ignored=0
testbed-node-2 : ok=38 changed=16 unreachable=0 failed=0 skipped=25 rescued=0 ignored=0
testbed-node-3 : ok=16 changed=8 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
testbed-node-4 : ok=16 changed=8 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0
testbed-node-5 : ok=16 changed=8 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Saturday 28 March 2026 00:50:12 +0000 (0:00:00.835) 0:05:03.166 ********
===============================================================================
k3s_server_post : Wait for Cilium resources ---------------------------- 44.45s
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.55s
k3s_server : Enable and check K3s service ------------------------------ 24.53s
Manage labels ---------------------------------------------------------- 20.58s
kubectl : Install required packages ------------------------------------ 16.82s
k3s_agent : Manage k3s service ----------------------------------------- 11.58s
kubectl : Add repository Debian ----------------------------------------- 9.72s
k3s_server_post : Install Cilium ---------------------------------------- 6.51s
k9s : Install k9s packages ---------------------------------------------- 6.44s
k3s_download : Download k3s binary x64 ---------------------------------- 6.38s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.55s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.38s
k3s_download : Download k3s binary armhf -------------------------------- 3.35s
k3s_server : Init cluster inside the transient k3s-init service --------- 3.06s
k3s_server : Detect Kubernetes version for label compatibility ---------- 2.99s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.89s
k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.78s
k3s_server_post : Test for BGP config resources ------------------------- 2.49s
k3s_server : Download vip rbac manifest to first master ----------------- 2.31s
k3s_server : Copy vip manifest to first master -------------------------- 2.27s

2026-03-28 00:50:14 | INFO  | Task 4668945b-206c-4f42-99a8-0bee3102db73 is in state STARTED
2026-03-28 00:50:14 | INFO  | Task 20a9b5f0-bb08-47c8-8b64-8a279b125006 is in state STARTED
2026-03-28 00:50:14 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:50:14 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:17 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED
2026-03-28 00:50:17 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED
2026-03-28 00:50:17 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED
2026-03-28 00:50:17 | INFO  | Task 4668945b-206c-4f42-99a8-0bee3102db73 is in state STARTED
2026-03-28 00:50:17 | INFO  | Task 20a9b5f0-bb08-47c8-8b64-8a279b125006 is in state STARTED
2026-03-28 00:50:17 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:50:17 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:20 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED
2026-03-28 00:50:20 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED
2026-03-28 00:50:20 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED
2026-03-28 00:50:20 | INFO  | Task 4668945b-206c-4f42-99a8-0bee3102db73 is in state STARTED
2026-03-28 00:50:20 | INFO  | Task 20a9b5f0-bb08-47c8-8b64-8a279b125006 is in state STARTED
2026-03-28 00:50:20 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:50:20 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:50:24 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED
2026-03-28 00:50:24 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED
2026-03-28 00:50:24 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED
2026-03-28 00:50:24 | INFO  | Task
4668945b-206c-4f42-99a8-0bee3102db73 is in state SUCCESS 2026-03-28 00:50:24.073056 | orchestrator | 2026-03-28 00:50:24 | INFO  | Task 20a9b5f0-bb08-47c8-8b64-8a279b125006 is in state STARTED 2026-03-28 00:50:24.074790 | orchestrator | 2026-03-28 00:50:24 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:24.074842 | orchestrator | 2026-03-28 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:27.152178 | orchestrator | 2026-03-28 00:50:27 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:27.157683 | orchestrator | 2026-03-28 00:50:27 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:27.157809 | orchestrator | 2026-03-28 00:50:27 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:27.159935 | orchestrator | 2026-03-28 00:50:27 | INFO  | Task 20a9b5f0-bb08-47c8-8b64-8a279b125006 is in state STARTED 2026-03-28 00:50:27.161350 | orchestrator | 2026-03-28 00:50:27 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:27.161379 | orchestrator | 2026-03-28 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:30.207808 | orchestrator | 2026-03-28 00:50:30 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:30.210168 | orchestrator | 2026-03-28 00:50:30 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:30.211604 | orchestrator | 2026-03-28 00:50:30 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:30.212583 | orchestrator | 2026-03-28 00:50:30 | INFO  | Task 20a9b5f0-bb08-47c8-8b64-8a279b125006 is in state SUCCESS 2026-03-28 00:50:30.218173 | orchestrator | 2026-03-28 00:50:30 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:30.218252 | orchestrator | 2026-03-28 00:50:30 | INFO  | Wait 1 
second(s) until the next check 2026-03-28 00:50:33.274823 | orchestrator | 2026-03-28 00:50:33 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:33.277633 | orchestrator | 2026-03-28 00:50:33 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:33.280420 | orchestrator | 2026-03-28 00:50:33 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:33.281374 | orchestrator | 2026-03-28 00:50:33 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:33.282321 | orchestrator | 2026-03-28 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:36.336149 | orchestrator | 2026-03-28 00:50:36 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:36.336238 | orchestrator | 2026-03-28 00:50:36 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:36.336248 | orchestrator | 2026-03-28 00:50:36 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:36.336257 | orchestrator | 2026-03-28 00:50:36 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:36.336266 | orchestrator | 2026-03-28 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:39.571785 | orchestrator | 2026-03-28 00:50:39 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:39.571865 | orchestrator | 2026-03-28 00:50:39 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:39.571874 | orchestrator | 2026-03-28 00:50:39 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:39.571880 | orchestrator | 2026-03-28 00:50:39 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:39.571887 | orchestrator | 2026-03-28 00:50:39 | INFO  | Wait 1 second(s) until the next check 
2026-03-28 00:50:42.626224 | orchestrator | 2026-03-28 00:50:42 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:42.628182 | orchestrator | 2026-03-28 00:50:42 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:42.629163 | orchestrator | 2026-03-28 00:50:42 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:42.630524 | orchestrator | 2026-03-28 00:50:42 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:42.630604 | orchestrator | 2026-03-28 00:50:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:45.661782 | orchestrator | 2026-03-28 00:50:45 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:45.662246 | orchestrator | 2026-03-28 00:50:45 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:45.662978 | orchestrator | 2026-03-28 00:50:45 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:45.663953 | orchestrator | 2026-03-28 00:50:45 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:45.663981 | orchestrator | 2026-03-28 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:48.689321 | orchestrator | 2026-03-28 00:50:48 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:48.690081 | orchestrator | 2026-03-28 00:50:48 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:48.690116 | orchestrator | 2026-03-28 00:50:48 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:48.690567 | orchestrator | 2026-03-28 00:50:48 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:48.690602 | orchestrator | 2026-03-28 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:51.722791 | 
orchestrator | 2026-03-28 00:50:51 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:51.724254 | orchestrator | 2026-03-28 00:50:51 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:51.726006 | orchestrator | 2026-03-28 00:50:51 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:51.727596 | orchestrator | 2026-03-28 00:50:51 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:51.727783 | orchestrator | 2026-03-28 00:50:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:54.774792 | orchestrator | 2026-03-28 00:50:54 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:54.776558 | orchestrator | 2026-03-28 00:50:54 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:54.779875 | orchestrator | 2026-03-28 00:50:54 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:54.780927 | orchestrator | 2026-03-28 00:50:54 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:54.781324 | orchestrator | 2026-03-28 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:50:57.816356 | orchestrator | 2026-03-28 00:50:57 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:50:57.817567 | orchestrator | 2026-03-28 00:50:57 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:50:57.818338 | orchestrator | 2026-03-28 00:50:57 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:50:57.819766 | orchestrator | 2026-03-28 00:50:57 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:50:57.819817 | orchestrator | 2026-03-28 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:00.863787 | orchestrator | 2026-03-28 
00:51:00 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:00.866387 | orchestrator | 2026-03-28 00:51:00 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:00.866469 | orchestrator | 2026-03-28 00:51:00 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:00.867092 | orchestrator | 2026-03-28 00:51:00 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:00.867130 | orchestrator | 2026-03-28 00:51:00 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:03.905552 | orchestrator | 2026-03-28 00:51:03 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:03.906245 | orchestrator | 2026-03-28 00:51:03 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:03.906958 | orchestrator | 2026-03-28 00:51:03 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:03.907858 | orchestrator | 2026-03-28 00:51:03 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:03.907916 | orchestrator | 2026-03-28 00:51:03 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:06.940544 | orchestrator | 2026-03-28 00:51:06 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:06.940884 | orchestrator | 2026-03-28 00:51:06 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:06.943967 | orchestrator | 2026-03-28 00:51:06 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:06.944378 | orchestrator | 2026-03-28 00:51:06 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:06.944402 | orchestrator | 2026-03-28 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:09.977351 | orchestrator | 2026-03-28 00:51:09 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:09.981682 | orchestrator | 2026-03-28 00:51:09 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:09.983437 | orchestrator | 2026-03-28 00:51:09 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:09.985039 | orchestrator | 2026-03-28 00:51:09 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:09.985151 | orchestrator | 2026-03-28 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:13.051733 | orchestrator | 2026-03-28 00:51:13 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:13.051842 | orchestrator | 2026-03-28 00:51:13 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:13.051858 | orchestrator | 2026-03-28 00:51:13 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:13.051870 | orchestrator | 2026-03-28 00:51:13 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:13.051880 | orchestrator | 2026-03-28 00:51:13 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:16.086589 | orchestrator | 2026-03-28 00:51:16 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:16.088226 | orchestrator | 2026-03-28 00:51:16 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:16.090192 | orchestrator | 2026-03-28 00:51:16 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:16.092288 | orchestrator | 2026-03-28 00:51:16 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:16.092464 | orchestrator | 2026-03-28 00:51:16 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:19.128130 | orchestrator | 2026-03-28 00:51:19 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:19.129882 | orchestrator | 2026-03-28 00:51:19 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:19.131723 | orchestrator | 2026-03-28 00:51:19 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:19.135123 | orchestrator | 2026-03-28 00:51:19 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:19.135156 | orchestrator | 2026-03-28 00:51:19 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:22.201202 | orchestrator | 2026-03-28 00:51:22 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:22.202538 | orchestrator | 2026-03-28 00:51:22 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:22.204180 | orchestrator | 2026-03-28 00:51:22 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:22.205477 | orchestrator | 2026-03-28 00:51:22 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:22.205527 | orchestrator | 2026-03-28 00:51:22 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:25.271685 | orchestrator | 2026-03-28 00:51:25 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:25.271780 | orchestrator | 2026-03-28 00:51:25 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:25.271813 | orchestrator | 2026-03-28 00:51:25 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:25.271825 | orchestrator | 2026-03-28 00:51:25 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:25.271861 | orchestrator | 2026-03-28 00:51:25 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:28.311684 | orchestrator | 2026-03-28 00:51:28 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:28.311765 | orchestrator | 2026-03-28 00:51:28 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:28.311775 | orchestrator | 2026-03-28 00:51:28 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:28.311784 | orchestrator | 2026-03-28 00:51:28 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:28.311791 | orchestrator | 2026-03-28 00:51:28 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:31.328969 | orchestrator | 2026-03-28 00:51:31 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:31.329503 | orchestrator | 2026-03-28 00:51:31 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:31.330755 | orchestrator | 2026-03-28 00:51:31 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:31.331380 | orchestrator | 2026-03-28 00:51:31 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:31.331430 | orchestrator | 2026-03-28 00:51:31 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:34.360376 | orchestrator | 2026-03-28 00:51:34 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:34.360813 | orchestrator | 2026-03-28 00:51:34 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:34.361777 | orchestrator | 2026-03-28 00:51:34 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:34.364197 | orchestrator | 2026-03-28 00:51:34 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:34.364248 | orchestrator | 2026-03-28 00:51:34 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:37.397206 | orchestrator | 2026-03-28 00:51:37 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:37.398324 | orchestrator | 2026-03-28 00:51:37 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:37.399916 | orchestrator | 2026-03-28 00:51:37 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:37.401750 | orchestrator | 2026-03-28 00:51:37 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:37.401810 | orchestrator | 2026-03-28 00:51:37 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:40.438398 | orchestrator | 2026-03-28 00:51:40 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:40.438879 | orchestrator | 2026-03-28 00:51:40 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:40.439619 | orchestrator | 2026-03-28 00:51:40 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:40.440504 | orchestrator | 2026-03-28 00:51:40 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:40.440537 | orchestrator | 2026-03-28 00:51:40 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:43.475449 | orchestrator | 2026-03-28 00:51:43 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:43.476399 | orchestrator | 2026-03-28 00:51:43 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:43.477363 | orchestrator | 2026-03-28 00:51:43 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:43.478534 | orchestrator | 2026-03-28 00:51:43 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:43.478580 | orchestrator | 2026-03-28 00:51:43 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:46.511384 | orchestrator | 2026-03-28 00:51:46 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:46.511639 | orchestrator | 2026-03-28 00:51:46 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:46.512736 | orchestrator | 2026-03-28 00:51:46 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:46.513539 | orchestrator | 2026-03-28 00:51:46 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:46.513585 | orchestrator | 2026-03-28 00:51:46 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:49.563364 | orchestrator | 2026-03-28 00:51:49 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:49.563628 | orchestrator | 2026-03-28 00:51:49 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:49.564677 | orchestrator | 2026-03-28 00:51:49 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:49.565784 | orchestrator | 2026-03-28 00:51:49 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:49.565921 | orchestrator | 2026-03-28 00:51:49 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:52.641330 | orchestrator | 2026-03-28 00:51:52 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:52.649816 | orchestrator | 2026-03-28 00:51:52 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:52.655935 | orchestrator | 2026-03-28 00:51:52 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:52.689518 | orchestrator | 2026-03-28 00:51:52 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:52.689594 | orchestrator | 2026-03-28 00:51:52 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:55.719561 | orchestrator | 2026-03-28 00:51:55 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:55.720105 | orchestrator | 2026-03-28 00:51:55 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:55.721044 | orchestrator | 2026-03-28 00:51:55 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:55.721944 | orchestrator | 2026-03-28 00:51:55 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:55.722104 | orchestrator | 2026-03-28 00:51:55 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:51:58.758164 | orchestrator | 2026-03-28 00:51:58 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:51:58.758352 | orchestrator | 2026-03-28 00:51:58 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:51:58.759183 | orchestrator | 2026-03-28 00:51:58 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:51:58.760284 | orchestrator | 2026-03-28 00:51:58 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:51:58.760371 | orchestrator | 2026-03-28 00:51:58 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:01.801839 | orchestrator | 2026-03-28 00:52:01 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:52:01.804014 | orchestrator | 2026-03-28 00:52:01 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:52:01.805494 | orchestrator | 2026-03-28 00:52:01 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:52:01.807490 | orchestrator | 2026-03-28 00:52:01 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:52:01.807854 | orchestrator | 2026-03-28 00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:04.850455 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:52:04.854923 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:52:04.855674 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:52:04.856733 | orchestrator | 2026-03-28 00:52:04 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:52:04.856774 | orchestrator | 2026-03-28 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:07.923913 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:52:07.926713 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:52:07.927630 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state STARTED 2026-03-28 00:52:07.928435 | orchestrator | 2026-03-28 00:52:07 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:52:07.928518 | orchestrator | 2026-03-28 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:10.965434 | orchestrator | 2026-03-28 00:52:10 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:52:10.967644 | orchestrator | 2026-03-28 00:52:10 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:52:10.972709 | orchestrator | 2026-03-28 00:52:10.972797 | orchestrator | 2026-03-28 00:52:10.972809 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-28 00:52:10.972819 | orchestrator | 2026-03-28 00:52:10.972829 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-28 00:52:10.972838 | orchestrator | Saturday 28 March 2026 00:50:17 +0000 (0:00:00.461) 0:00:00.461 
******** 2026-03-28 00:52:10.972847 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-28 00:52:10.972857 | orchestrator | 2026-03-28 00:52:10.972866 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-28 00:52:10.972874 | orchestrator | Saturday 28 March 2026 00:50:18 +0000 (0:00:01.264) 0:00:01.726 ******** 2026-03-28 00:52:10.972883 | orchestrator | changed: [testbed-manager] 2026-03-28 00:52:10.972892 | orchestrator | 2026-03-28 00:52:10.972901 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-28 00:52:10.972910 | orchestrator | Saturday 28 March 2026 00:50:20 +0000 (0:00:02.169) 0:00:03.895 ******** 2026-03-28 00:52:10.972919 | orchestrator | changed: [testbed-manager] 2026-03-28 00:52:10.972927 | orchestrator | 2026-03-28 00:52:10.972936 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:52:10.973020 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 00:52:10.973031 | orchestrator | 2026-03-28 00:52:10.973039 | orchestrator | 2026-03-28 00:52:10.973048 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:52:10.973056 | orchestrator | Saturday 28 March 2026 00:50:21 +0000 (0:00:00.862) 0:00:04.757 ******** 2026-03-28 00:52:10.973065 | orchestrator | =============================================================================== 2026-03-28 00:52:10.973097 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.17s 2026-03-28 00:52:10.973106 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.26s 2026-03-28 00:52:10.973115 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.86s 2026-03-28 00:52:10.973123 | orchestrator | 
2026-03-28 00:52:10.973131 | orchestrator |
2026-03-28 00:52:10.973140 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-28 00:52:10.973148 | orchestrator |
2026-03-28 00:52:10.973157 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-28 00:52:10.973165 | orchestrator | Saturday 28 March 2026 00:50:17 +0000 (0:00:00.349) 0:00:00.349 ********
2026-03-28 00:52:10.973173 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:10.973183 | orchestrator |
2026-03-28 00:52:10.973191 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-28 00:52:10.973200 | orchestrator | Saturday 28 March 2026 00:50:18 +0000 (0:00:01.109) 0:00:01.459 ********
2026-03-28 00:52:10.973208 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:10.973216 | orchestrator |
2026-03-28 00:52:10.973225 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-28 00:52:10.973234 | orchestrator | Saturday 28 March 2026 00:50:19 +0000 (0:00:00.894) 0:00:02.353 ********
2026-03-28 00:52:10.973242 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-28 00:52:10.973250 | orchestrator |
2026-03-28 00:52:10.973259 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-28 00:52:10.973267 | orchestrator | Saturday 28 March 2026 00:50:20 +0000 (0:00:01.356) 0:00:03.709 ********
2026-03-28 00:52:10.973276 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:10.973284 | orchestrator |
2026-03-28 00:52:10.973295 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-28 00:52:10.973304 | orchestrator | Saturday 28 March 2026 00:50:22 +0000 (0:00:01.811) 0:00:05.521 ********
2026-03-28 00:52:10.973314 | orchestrator | changed: [testbed-manager]
2026-03-28 00:52:10.973324 | orchestrator |
2026-03-28 00:52:10.973333 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-28 00:52:10.973343 | orchestrator | Saturday 28 March 2026 00:50:22 +0000 (0:00:00.561) 0:00:06.083 ********
2026-03-28 00:52:10.973352 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 00:52:10.973362 | orchestrator |
2026-03-28 00:52:10.973372 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-28 00:52:10.973381 | orchestrator | Saturday 28 March 2026 00:50:24 +0000 (0:00:01.938) 0:00:08.022 ********
2026-03-28 00:52:10.973391 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 00:52:10.973401 | orchestrator |
2026-03-28 00:52:10.973410 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-28 00:52:10.973420 | orchestrator | Saturday 28 March 2026 00:50:25 +0000 (0:00:01.083) 0:00:09.105 ********
2026-03-28 00:52:10.973430 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:10.973440 | orchestrator |
2026-03-28 00:52:10.973449 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-28 00:52:10.973457 | orchestrator | Saturday 28 March 2026 00:50:26 +0000 (0:00:00.761) 0:00:09.867 ********
2026-03-28 00:52:10.973466 | orchestrator | ok: [testbed-manager]
2026-03-28 00:52:10.973474 | orchestrator |
2026-03-28 00:52:10.973482 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:52:10.973491 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:52:10.973500 | orchestrator |
2026-03-28 00:52:10.973508 | orchestrator |
2026-03-28 00:52:10.973529 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:52:10.973538 | orchestrator | Saturday 28 March 2026 00:50:27 +0000 (0:00:00.391) 0:00:10.259 ********
2026-03-28 00:52:10.973547 | orchestrator | ===============================================================================
2026-03-28 00:52:10.973561 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.94s
2026-03-28 00:52:10.973569 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.81s
2026-03-28 00:52:10.973578 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.36s
2026-03-28 00:52:10.973602 | orchestrator | Get home directory of operator user ------------------------------------- 1.11s
2026-03-28 00:52:10.973611 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.08s
2026-03-28 00:52:10.973619 | orchestrator | Create .kube directory -------------------------------------------------- 0.89s
2026-03-28 00:52:10.973628 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.76s
2026-03-28 00:52:10.973636 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.56s
2026-03-28 00:52:10.973645 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.39s
2026-03-28 00:52:10.973653 | orchestrator |
2026-03-28 00:52:10.973662 | orchestrator |
2026-03-28 00:52:10.973670 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-03-28 00:52:10.973679 | orchestrator |
2026-03-28 00:52:10.973687 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-28 00:52:10.973696 | orchestrator | Saturday 28 March 2026 00:48:39 +0000 (0:00:00.414) 0:00:00.414 ********
2026-03-28 00:52:10.973704 | orchestrator | ok: [localhost] => {
2026-03-28 00:52:10.973713 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-03-28 00:52:10.973723 | orchestrator | }
2026-03-28 00:52:10.973731 | orchestrator |
2026-03-28 00:52:10.973744 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-03-28 00:52:10.973758 | orchestrator | Saturday 28 March 2026 00:48:39 +0000 (0:00:00.148) 0:00:00.564 ********
2026-03-28 00:52:10.973774 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-03-28 00:52:10.973789 | orchestrator | ...ignoring
2026-03-28 00:52:10.973802 | orchestrator |
2026-03-28 00:52:10.973816 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-03-28 00:52:10.973829 | orchestrator | Saturday 28 March 2026 00:48:43 +0000 (0:00:04.303) 0:00:04.867 ********
2026-03-28 00:52:10.973842 | orchestrator | skipping: [localhost]
2026-03-28 00:52:10.973855 | orchestrator |
2026-03-28 00:52:10.973868 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-03-28 00:52:10.973881 | orchestrator | Saturday 28 March 2026 00:48:43 +0000 (0:00:00.133) 0:00:05.000 ********
2026-03-28 00:52:10.973895 | orchestrator | ok: [localhost]
2026-03-28 00:52:10.973909 | orchestrator |
2026-03-28 00:52:10.973923 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:52:10.973961 | orchestrator |
2026-03-28 00:52:10.973977 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 00:52:10.973993 | orchestrator | Saturday 28 March 2026 00:48:45 +0000 (0:00:01.476) 0:00:06.477 ********
2026-03-28 00:52:10.974003 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:10.974011 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:52:10.974109 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:52:10.974119 | orchestrator |
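The "Check RabbitMQ service" task above is a reachability probe: it repeatedly connects to the management UI on 192.168.16.9:15672 and looks for the string "RabbitMQ Management" in the response (the error format matches Ansible's `wait_for` module with a search string). On a fresh deployment nothing is listening yet, so the probe times out, the failure is ignored, and `kolla_action_rabbitmq` falls through to the default action instead of `upgrade`. A minimal standalone sketch of the same probe logic (function name, hosts, and timeouts are illustrative, not taken from the playbook):

```python
import socket
import time


def wait_for_search_string(host, port, search, conn_timeout=2.0, deadline=10.0):
    """Poll host:port over plain HTTP until the response contains `search`.

    Mirrors the wait_for semantics seen in the log: keep reconnecting until
    the string shows up or the overall deadline expires. Returns True on a
    match, False on timeout (the log's 'fatal ... ...ignoring' case).
    """
    end = time.time() + deadline
    request = f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()
    while time.time() < end:
        try:
            with socket.create_connection((host, port), timeout=conn_timeout) as sock:
                sock.sendall(request)
                chunks = []
                while True:  # HTTP/1.0: server closes the connection when done
                    chunk = sock.recv(4096)
                    if not chunk:
                        break
                    chunks.append(chunk)
                if search in b"".join(chunks).decode(errors="replace"):
                    return True
        except OSError:
            pass  # nothing listening yet; retry until the deadline
        time.sleep(1)
    return False
```

In this run the probe returned the timeout case after 2 seconds of `elapsed` waiting, which is exactly the outcome the "Inform the user" task warns is fine on a first deploy.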
2026-03-28 00:52:10.974128 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 00:52:10.974137 | orchestrator | Saturday 28 March 2026 00:48:45 +0000 (0:00:00.560) 0:00:07.037 ********
2026-03-28 00:52:10.974145 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-28 00:52:10.974154 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-28 00:52:10.974163 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-28 00:52:10.974172 | orchestrator |
2026-03-28 00:52:10.974180 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-28 00:52:10.974201 | orchestrator |
2026-03-28 00:52:10.974210 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-28 00:52:10.974219 | orchestrator | Saturday 28 March 2026 00:48:46 +0000 (0:00:01.141) 0:00:08.178 ********
2026-03-28 00:52:10.974249 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:52:10.974259 | orchestrator |
2026-03-28 00:52:10.974268 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-28 00:52:10.974276 | orchestrator | Saturday 28 March 2026 00:48:48 +0000 (0:00:01.442) 0:00:09.620 ********
2026-03-28 00:52:10.974285 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:10.974293 | orchestrator |
2026-03-28 00:52:10.974302 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-28 00:52:10.974311 | orchestrator | Saturday 28 March 2026 00:48:50 +0000 (0:00:01.972) 0:00:11.593 ********
2026-03-28 00:52:10.974319 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:10.974328 | orchestrator |
2026-03-28 00:52:10.974336 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-28 00:52:10.974345 | orchestrator | Saturday 28 March 2026 00:48:51 +0000 (0:00:00.850) 0:00:12.444 ********
2026-03-28 00:52:10.974354 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:10.974362 | orchestrator |
2026-03-28 00:52:10.974371 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-28 00:52:10.974379 | orchestrator | Saturday 28 March 2026 00:48:51 +0000 (0:00:00.553) 0:00:12.998 ********
2026-03-28 00:52:10.974388 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:10.974396 | orchestrator |
2026-03-28 00:52:10.974405 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-28 00:52:10.974414 | orchestrator | Saturday 28 March 2026 00:48:52 +0000 (0:00:00.814) 0:00:13.812 ********
2026-03-28 00:52:10.974429 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:52:10.974438 | orchestrator |
2026-03-28 00:52:10.974446 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-28 00:52:10.974455 | orchestrator | Saturday 28 March 2026 00:48:53 +0000 (0:00:01.017) 0:00:14.830 ********
2026-03-28 00:52:10.974464 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:52:10.974472 | orchestrator |
2026-03-28 00:52:10.974481 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-28 00:52:10.974507 | orchestrator | Saturday 28 March 2026 00:48:54 +0000 (0:00:01.206) 0:00:16.037 ********
2026-03-28 00:52:10.974517 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:52:10.974525 | orchestrator |
2026-03-28 00:52:10.974534 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-28 00:52:10.974542 | orchestrator | Saturday 28 March 2026 00:48:55 +0000
(0:00:00.888) 0:00:16.925 ******** 2026-03-28 00:52:10.974551 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:10.974559 | orchestrator | 2026-03-28 00:52:10.974567 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-28 00:52:10.974576 | orchestrator | Saturday 28 March 2026 00:48:56 +0000 (0:00:00.797) 0:00:17.723 ******** 2026-03-28 00:52:10.974584 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:10.974593 | orchestrator | 2026-03-28 00:52:10.974601 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-28 00:52:10.974610 | orchestrator | Saturday 28 March 2026 00:48:56 +0000 (0:00:00.332) 0:00:18.056 ******** 2026-03-28 00:52:10.974624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.974646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 
'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.974662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}}) 2026-03-28 00:52:10.974671 | orchestrator | 2026-03-28 00:52:10.974680 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-28 00:52:10.974689 | orchestrator | Saturday 28 March 2026 00:48:58 +0000 (0:00:01.594) 0:00:19.651 ******** 2026-03-28 00:52:10.974707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.974717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.974733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.974759 | orchestrator | 2026-03-28 00:52:10.974769 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-28 00:52:10.974778 | orchestrator | Saturday 28 March 2026 00:49:00 +0000 (0:00:01.893) 0:00:21.544 ******** 2026-03-28 00:52:10.974786 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-28 00:52:10.974795 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-28 00:52:10.974804 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-28 00:52:10.974812 | orchestrator |
2026-03-28 00:52:10.974821 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-03-28 00:52:10.974829 | orchestrator | Saturday 28 March 2026 00:49:02 +0000 (0:00:02.520) 0:00:24.065 ********
2026-03-28 00:52:10.974842 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-28 00:52:10.974851 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-28 00:52:10.974860 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-28 00:52:10.974868 | orchestrator |
2026-03-28 00:52:10.974877 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-28 00:52:10.974890 | orchestrator | Saturday 28 March 2026 00:49:05 +0000 (0:00:03.094) 0:00:27.160 ********
2026-03-28 00:52:10.974899 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-28 00:52:10.974908 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-28 00:52:10.974916 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-28 00:52:10.974925 | orchestrator |
2026-03-28 00:52:10.974933 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-28 00:52:10.974976 | orchestrator | Saturday 28 March 2026 00:49:08 +0000 (0:00:02.818) 0:00:29.978 ********
2026-03-28 00:52:10.974985 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-28 00:52:10.974994 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-28 00:52:10.975003 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-28 00:52:10.975011 | orchestrator |
2026-03-28 00:52:10.975020 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-28 00:52:10.975028 | orchestrator | Saturday 28 March 2026 00:49:10 +0000 (0:00:02.085) 0:00:32.064 ********
2026-03-28 00:52:10.975037 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-28 00:52:10.975045 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-28 00:52:10.975054 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-28 00:52:10.975063 | orchestrator |
2026-03-28 00:52:10.975071 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-28 00:52:10.975080 | orchestrator | Saturday 28 March 2026 00:49:12 +0000 (0:00:01.479) 0:00:33.544 ********
2026-03-28 00:52:10.975088 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-28 00:52:10.975097 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-28 00:52:10.975111 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-28 00:52:10.975126 | orchestrator |
2026-03-28 00:52:10.975138 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-28 00:52:10.975150 | orchestrator | Saturday 28 March 2026 00:49:14 +0000
(0:00:02.274) 0:00:35.819 ******** 2026-03-28 00:52:10.975163 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:52:10.975177 | orchestrator | 2026-03-28 00:52:10.975192 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-03-28 00:52:10.975203 | orchestrator | Saturday 28 March 2026 00:49:15 +0000 (0:00:01.044) 0:00:36.863 ******** 2026-03-28 00:52:10.975213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.975236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.975254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.975264 | orchestrator | 2026-03-28 00:52:10.975273 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-03-28 00:52:10.975282 | orchestrator | Saturday 28 March 2026 00:49:17 +0000 
(0:00:02.091) 0:00:38.954 ******** 2026-03-28 00:52:10.975291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:52:10.975301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:52:10.975310 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:10.975325 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:10.975350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:52:10.975360 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:10.975369 | orchestrator | 2026-03-28 00:52:10.975378 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-03-28 00:52:10.975387 | orchestrator | Saturday 28 March 2026 00:49:18 +0000 (0:00:00.465) 0:00:39.420 ******** 2026-03-28 00:52:10.975396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:52:10.975405 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:10.975414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:52:10.975424 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:10.975433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:52:10.975449 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:10.975458 | orchestrator | 2026-03-28 00:52:10.975467 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-03-28 00:52:10.975480 | orchestrator | Saturday 28 March 2026 00:49:20 +0000 (0:00:02.338) 0:00:41.758 ******** 2026-03-28 00:52:10.975557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.975579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.975589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:52:10.975606 | orchestrator | 2026-03-28 00:52:10.975615 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-03-28 00:52:10.975628 | orchestrator | Saturday 28 March 2026 00:49:21 +0000 (0:00:01.247) 0:00:43.006 ******** 2026-03-28 00:52:10.975642 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 00:52:10.975658 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:52:10.975672 | orchestrator | } 2026-03-28 00:52:10.975687 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:52:10.975701 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:52:10.975713 | orchestrator | } 2026-03-28 00:52:10.975727 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:52:10.975746 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:52:10.975760 | orchestrator | } 2026-03-28 00:52:10.975774 | orchestrator | 2026-03-28 00:52:10.975788 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:52:10.975803 | orchestrator | Saturday 28 March 2026 00:49:22 +0000 (0:00:00.575) 
0:00:43.581 ******** 2026-03-28 00:52:10.975831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:52:10.975848 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:10.975863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:52:10.975873 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:10.975883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:52:10.975900 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:10.975910 | orchestrator | 2026-03-28 00:52:10.975918 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-28 00:52:10.975927 | orchestrator | Saturday 28 March 2026 00:49:23 +0000 (0:00:00.930) 0:00:44.512 ******** 2026-03-28 00:52:10.975935 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:10.975970 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:10.975979 | orchestrator | changed: [testbed-node-2] 
2026-03-28 00:52:10.975988 | orchestrator | 2026-03-28 00:52:10.975996 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-28 00:52:10.976005 | orchestrator | Saturday 28 March 2026 00:49:24 +0000 (0:00:00.848) 0:00:45.361 ******** 2026-03-28 00:52:10.976014 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:10.976022 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:10.976031 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:10.976039 | orchestrator | 2026-03-28 00:52:10.976048 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-28 00:52:10.976062 | orchestrator | Saturday 28 March 2026 00:49:36 +0000 (0:00:12.000) 0:00:57.362 ******** 2026-03-28 00:52:10.976071 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:10.976080 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:10.976088 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:10.976097 | orchestrator | 2026-03-28 00:52:10.976106 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 00:52:10.976115 | orchestrator | 2026-03-28 00:52:10.976123 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 00:52:10.976138 | orchestrator | Saturday 28 March 2026 00:49:36 +0000 (0:00:00.496) 0:00:57.858 ******** 2026-03-28 00:52:10.976147 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:10.976156 | orchestrator | 2026-03-28 00:52:10.976164 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 00:52:10.976172 | orchestrator | Saturday 28 March 2026 00:49:37 +0000 (0:00:00.691) 0:00:58.550 ******** 2026-03-28 00:52:10.976181 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:52:10.976190 | orchestrator | 2026-03-28 00:52:10.976198 | orchestrator | TASK [rabbitmq : Restart rabbitmq 
container] *********************************** 2026-03-28 00:52:10.976207 | orchestrator | Saturday 28 March 2026 00:49:37 +0000 (0:00:00.158) 0:00:58.708 ******** 2026-03-28 00:52:10.976215 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:10.976223 | orchestrator | 2026-03-28 00:52:10.976232 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 00:52:10.976240 | orchestrator | Saturday 28 March 2026 00:49:44 +0000 (0:00:06.968) 0:01:05.677 ******** 2026-03-28 00:52:10.976249 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:52:10.976258 | orchestrator | 2026-03-28 00:52:10.976266 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 00:52:10.976275 | orchestrator | 2026-03-28 00:52:10.976283 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 00:52:10.976292 | orchestrator | Saturday 28 March 2026 00:51:33 +0000 (0:01:49.468) 0:02:55.145 ******** 2026-03-28 00:52:10.976300 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:10.976308 | orchestrator | 2026-03-28 00:52:10.976317 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 00:52:10.976326 | orchestrator | Saturday 28 March 2026 00:51:34 +0000 (0:00:00.751) 0:02:55.899 ******** 2026-03-28 00:52:10.976341 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:52:10.976350 | orchestrator | 2026-03-28 00:52:10.976358 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 00:52:10.976367 | orchestrator | Saturday 28 March 2026 00:51:34 +0000 (0:00:00.219) 0:02:56.119 ******** 2026-03-28 00:52:10.976375 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:10.976384 | orchestrator | 2026-03-28 00:52:10.976393 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 
2026-03-28 00:52:10.976401 | orchestrator | Saturday 28 March 2026 00:51:41 +0000 (0:00:07.177) 0:03:03.296 ******** 2026-03-28 00:52:10.976410 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:52:10.976418 | orchestrator | 2026-03-28 00:52:10.976427 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-28 00:52:10.976435 | orchestrator | 2026-03-28 00:52:10.976444 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-28 00:52:10.976453 | orchestrator | Saturday 28 March 2026 00:51:49 +0000 (0:00:07.987) 0:03:11.284 ******** 2026-03-28 00:52:10.976461 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:10.976470 | orchestrator | 2026-03-28 00:52:10.976478 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-28 00:52:10.976487 | orchestrator | Saturday 28 March 2026 00:51:50 +0000 (0:00:00.803) 0:03:12.088 ******** 2026-03-28 00:52:10.976495 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:52:10.976504 | orchestrator | 2026-03-28 00:52:10.976512 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-28 00:52:10.976521 | orchestrator | Saturday 28 March 2026 00:51:50 +0000 (0:00:00.174) 0:03:12.263 ******** 2026-03-28 00:52:10.976530 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:10.976538 | orchestrator | 2026-03-28 00:52:10.976547 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-28 00:52:10.976555 | orchestrator | Saturday 28 March 2026 00:51:52 +0000 (0:00:01.988) 0:03:14.252 ******** 2026-03-28 00:52:10.976564 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:52:10.976572 | orchestrator | 2026-03-28 00:52:10.976581 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-28 00:52:10.976590 | orchestrator | 2026-03-28 
00:52:10.976598 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-28 00:52:10.976607 | orchestrator | Saturday 28 March 2026 00:52:04 +0000 (0:00:11.269) 0:03:25.521 ******** 2026-03-28 00:52:10.976616 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:52:10.976624 | orchestrator | 2026-03-28 00:52:10.976633 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-28 00:52:10.976641 | orchestrator | Saturday 28 March 2026 00:52:05 +0000 (0:00:00.865) 0:03:26.386 ******** 2026-03-28 00:52:10.976650 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:52:10.976659 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:52:10.976667 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:52:10.976676 | orchestrator | 2026-03-28 00:52:10.976684 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:52:10.976693 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-28 00:52:10.976703 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-03-28 00:52:10.976712 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:52:10.976726 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 00:52:10.976736 | orchestrator | 2026-03-28 00:52:10.976744 | orchestrator | 2026-03-28 00:52:10.976753 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:52:10.976767 | orchestrator | Saturday 28 March 2026 00:52:07 +0000 (0:00:02.943) 0:03:29.330 ******** 2026-03-28 00:52:10.976776 | orchestrator | =============================================================================== 2026-03-28 
00:52:10.976785 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 128.73s 2026-03-28 00:52:10.976799 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 16.13s 2026-03-28 00:52:10.976808 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 12.00s 2026-03-28 00:52:10.976816 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.30s 2026-03-28 00:52:10.976825 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.10s 2026-03-28 00:52:10.976834 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.94s 2026-03-28 00:52:10.976842 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.82s 2026-03-28 00:52:10.976851 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.52s 2026-03-28 00:52:10.976859 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 2.34s 2026-03-28 00:52:10.976868 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.27s 2026-03-28 00:52:10.976876 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.25s 2026-03-28 00:52:10.976885 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.09s 2026-03-28 00:52:10.976893 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.09s 2026-03-28 00:52:10.976902 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.97s 2026-03-28 00:52:10.976910 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.89s 2026-03-28 00:52:10.976919 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.59s 2026-03-28 00:52:10.976927 
| orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.48s 2026-03-28 00:52:10.976936 | orchestrator | Set kolla_action_rabbitmq = kolla_action_ng ----------------------------- 1.48s 2026-03-28 00:52:10.976967 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.44s 2026-03-28 00:52:10.976975 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.25s 2026-03-28 00:52:10.976985 | orchestrator | 2026-03-28 00:52:10 | INFO  | Task 5cd81820-b692-4af6-b5c5-a14743be2e00 is in state SUCCESS 2026-03-28 00:52:10.976993 | orchestrator | 2026-03-28 00:52:10 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:52:10.977002 | orchestrator | 2026-03-28 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:14.042211 | orchestrator | 2026-03-28 00:52:14 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:52:14.043723 | orchestrator | 2026-03-28 00:52:14 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:52:14.044882 | orchestrator | 2026-03-28 00:52:14 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:52:14.044923 | orchestrator | 2026-03-28 00:52:14 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:17.081987 | orchestrator | 2026-03-28 00:52:17 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:52:17.083400 | orchestrator | 2026-03-28 00:52:17 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:52:17.085651 | orchestrator | 2026-03-28 00:52:17 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:52:17.085887 | orchestrator | 2026-03-28 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:52:20.126431 | orchestrator | 2026-03-28 00:52:20 | INFO  | Task 
bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28
00:53:51.544993 | orchestrator | 2026-03-28 00:53:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:54.579666 | orchestrator | 2026-03-28 00:53:54 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:53:54.580019 | orchestrator | 2026-03-28 00:53:54 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:53:54.581344 | orchestrator | 2026-03-28 00:53:54 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:53:54.581547 | orchestrator | 2026-03-28 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:53:57.638981 | orchestrator | 2026-03-28 00:53:57 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:53:57.641523 | orchestrator | 2026-03-28 00:53:57 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:53:57.644140 | orchestrator | 2026-03-28 00:53:57 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:53:57.644275 | orchestrator | 2026-03-28 00:53:57 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:00.679872 | orchestrator | 2026-03-28 00:54:00 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:00.680217 | orchestrator | 2026-03-28 00:54:00 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:00.681268 | orchestrator | 2026-03-28 00:54:00 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:00.681580 | orchestrator | 2026-03-28 00:54:00 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:03.720394 | orchestrator | 2026-03-28 00:54:03 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:03.720608 | orchestrator | 2026-03-28 00:54:03 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:03.721450 | orchestrator | 2026-03-28 00:54:03 | 
INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:03.721492 | orchestrator | 2026-03-28 00:54:03 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:06.778191 | orchestrator | 2026-03-28 00:54:06 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:06.778405 | orchestrator | 2026-03-28 00:54:06 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:06.779840 | orchestrator | 2026-03-28 00:54:06 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:06.779872 | orchestrator | 2026-03-28 00:54:06 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:09.814532 | orchestrator | 2026-03-28 00:54:09 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:09.816409 | orchestrator | 2026-03-28 00:54:09 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:09.818938 | orchestrator | 2026-03-28 00:54:09 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:09.819050 | orchestrator | 2026-03-28 00:54:09 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:12.867899 | orchestrator | 2026-03-28 00:54:12 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:12.870438 | orchestrator | 2026-03-28 00:54:12 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:12.873005 | orchestrator | 2026-03-28 00:54:12 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:12.873069 | orchestrator | 2026-03-28 00:54:12 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:15.915412 | orchestrator | 2026-03-28 00:54:15 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:15.917157 | orchestrator | 2026-03-28 00:54:15 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in 
state STARTED 2026-03-28 00:54:15.918438 | orchestrator | 2026-03-28 00:54:15 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:15.918574 | orchestrator | 2026-03-28 00:54:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:18.964303 | orchestrator | 2026-03-28 00:54:18 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:18.966194 | orchestrator | 2026-03-28 00:54:18 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:18.968904 | orchestrator | 2026-03-28 00:54:18 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:18.968929 | orchestrator | 2026-03-28 00:54:18 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:22.021244 | orchestrator | 2026-03-28 00:54:22 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:22.022657 | orchestrator | 2026-03-28 00:54:22 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:22.024552 | orchestrator | 2026-03-28 00:54:22 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:22.025701 | orchestrator | 2026-03-28 00:54:22 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:25.094319 | orchestrator | 2026-03-28 00:54:25 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:25.095181 | orchestrator | 2026-03-28 00:54:25 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:25.096784 | orchestrator | 2026-03-28 00:54:25 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:25.096837 | orchestrator | 2026-03-28 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:28.146407 | orchestrator | 2026-03-28 00:54:28 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:28.150236 | orchestrator 
| 2026-03-28 00:54:28 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:28.154401 | orchestrator | 2026-03-28 00:54:28 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:28.154490 | orchestrator | 2026-03-28 00:54:28 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:31.193374 | orchestrator | 2026-03-28 00:54:31 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:31.194805 | orchestrator | 2026-03-28 00:54:31 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:31.196450 | orchestrator | 2026-03-28 00:54:31 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:31.196545 | orchestrator | 2026-03-28 00:54:31 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:34.281486 | orchestrator | 2026-03-28 00:54:34 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:34.281580 | orchestrator | 2026-03-28 00:54:34 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state STARTED 2026-03-28 00:54:34.281593 | orchestrator | 2026-03-28 00:54:34 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:54:34.281602 | orchestrator | 2026-03-28 00:54:34 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:54:37.264914 | orchestrator | 2026-03-28 00:54:37 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED 2026-03-28 00:54:37.266591 | orchestrator | 2026-03-28 00:54:37 | INFO  | Task 5d0cc978-b979-4f8d-a1ac-798061d04512 is in state SUCCESS 2026-03-28 00:54:37.268339 | orchestrator | 2026-03-28 00:54:37.268430 | orchestrator | 2026-03-28 00:54:37.268445 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:54:37.268457 | orchestrator | 2026-03-28 00:54:37.268469 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-03-28 00:54:37.268480 | orchestrator | Saturday 28 March 2026 00:49:47 +0000 (0:00:00.659) 0:00:00.659 ******** 2026-03-28 00:54:37.268491 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:37.268502 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:37.268513 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:37.268524 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:54:37.268534 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:54:37.268545 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:54:37.268555 | orchestrator | 2026-03-28 00:54:37.268566 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:54:37.268577 | orchestrator | Saturday 28 March 2026 00:49:48 +0000 (0:00:01.108) 0:00:01.767 ******** 2026-03-28 00:54:37.268588 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-28 00:54:37.268617 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-28 00:54:37.268628 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-28 00:54:37.268639 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-28 00:54:37.268650 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-28 00:54:37.268661 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-28 00:54:37.268672 | orchestrator | 2026-03-28 00:54:37.268682 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-28 00:54:37.268745 | orchestrator | 2026-03-28 00:54:37.268766 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-28 00:54:37.268785 | orchestrator | Saturday 28 March 2026 00:49:50 +0000 (0:00:02.092) 0:00:03.860 ******** 2026-03-28 00:54:37.268798 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:54:37.268810 | orchestrator | 2026-03-28 00:54:37.268820 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-28 00:54:37.268831 | orchestrator | Saturday 28 March 2026 00:49:52 +0000 (0:00:02.245) 0:00:06.105 ******** 2026-03-28 00:54:37.268845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.268883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.268897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.268910 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.268923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.268936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.268949 | orchestrator | 2026-03-28 00:54:37.268982 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-28 00:54:37.268996 | orchestrator | Saturday 28 March 2026 00:49:56 +0000 (0:00:03.417) 0:00:09.522 ******** 2026-03-28 00:54:37.269009 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269028 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269101 | orchestrator | 2026-03-28 00:54:37.269114 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-28 00:54:37.269126 | orchestrator | Saturday 28 March 2026 00:49:59 +0000 (0:00:03.027) 0:00:12.550 ******** 2026-03-28 00:54:37.269139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269189 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269241 | orchestrator | 2026-03-28 00:54:37.269254 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-28 00:54:37.269266 | orchestrator | Saturday 28 March 2026 00:50:03 +0000 (0:00:03.818) 0:00:16.368 ******** 2026-03-28 00:54:37.269277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-28 00:54:37.269288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269343 | orchestrator | 2026-03-28 00:54:37.269360 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-03-28 00:54:37.269371 | orchestrator | Saturday 28 March 2026 00:50:06 +0000 (0:00:03.152) 0:00:19.521 ******** 2026-03-28 00:54:37.269382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269428 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.269461 | orchestrator | 2026-03-28 00:54:37.269472 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-03-28 00:54:37.269483 | orchestrator | Saturday 28 March 2026 00:50:08 +0000 (0:00:02.616) 0:00:22.138 ******** 2026-03-28 00:54:37.269493 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 00:54:37.269504 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.269515 | orchestrator | } 2026-03-28 00:54:37.269526 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:54:37.269537 | orchestrator |  "msg": "Notifying handlers" 
2026-03-28 00:54:37.269562 | orchestrator | } 2026-03-28 00:54:37.269573 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:54:37.269584 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.269595 | orchestrator | } 2026-03-28 00:54:37.269606 | orchestrator | changed: [testbed-node-3] => { 2026-03-28 00:54:37.269617 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.269628 | orchestrator | } 2026-03-28 00:54:37.269639 | orchestrator | changed: [testbed-node-4] => { 2026-03-28 00:54:37.269650 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.269661 | orchestrator | } 2026-03-28 00:54:37.269672 | orchestrator | changed: [testbed-node-5] => { 2026-03-28 00:54:37.269683 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.269721 | orchestrator | } 2026-03-28 00:54:37.269735 | orchestrator | 2026-03-28 00:54:37.269747 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:54:37.269758 | orchestrator | Saturday 28 March 2026 00:50:10 +0000 (0:00:01.910) 0:00:24.048 ******** 2026-03-28 00:54:37.269770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.269788 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:37.269808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.269820 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:37.269838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.269849 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:37.269860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.269873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.269884 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:54:37.269894 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:54:37.269905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.269916 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:54:37.269927 | orchestrator | 2026-03-28 00:54:37.269939 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-28 00:54:37.269951 | orchestrator | Saturday 28 March 2026 00:50:12 +0000 (0:00:01.983) 0:00:26.032 ******** 2026-03-28 00:54:37.269963 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:37.269974 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:37.269985 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:54:37.269996 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:37.270007 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:54:37.270086 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:54:37.270099 | orchestrator | 2026-03-28 00:54:37.270110 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-28 00:54:37.270121 | orchestrator | Saturday 28 March 2026 00:50:16 +0000 (0:00:03.417) 0:00:29.449 ******** 2026-03-28 00:54:37.270131 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-28 00:54:37.270151 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-28 00:54:37.270178 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-28 00:54:37.270201 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-28 00:54:37.270235 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-28 
00:54:37.270256 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:37.270274 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-28 00:54:37.270295 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:37.270315 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:37.270336 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:37.270357 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:37.270378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 00:54:37.270418 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-28 00:54:37.270446 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 00:54:37.270465 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 00:54:37.270484 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 00:54:37.270513 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 00:54:37.270534 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:37.270554 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-03-28 00:54:37.270571 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:37.270590 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:37.270611 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:37.270629 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:37.270650 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:37.270668 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-28 00:54:37.270688 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:37.270751 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:37.270770 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:37.270787 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:37.270803 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:37.270823 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-28 00:54:37.270842 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:37.270860 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:37.270893 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:37.270905 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-28 00:54:37.270917 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:37.270928 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-28 00:54:37.270939 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-28 00:54:37.270957 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-28 00:54:37.270976 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-28 00:54:37.270999 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-03-28 00:54:37.271029 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-28 00:54:37.271047 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-28 00:54:37.271065 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-03-28 00:54:37.271083 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-03-28 00:54:37.271100 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
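[Editor's note] The loop items applied above are Open vSwitch external_ids on each chassis; the ovn-remote value in particular is a comma-separated list of OVN southbound endpoints. A minimal sketch of how such a connection string decomposes — the helper name `parse_ovn_remote` is hypothetical and not part of the deployment code:

```python
# Hypothetical helper (not from the OSISM/kolla-ansible code base): split an
# ovn-remote connection string, as applied in the log above, into endpoints.
def parse_ovn_remote(remote):
    """Return (proto, host, port) tuples for a 'tcp:ip:port,...' string."""
    endpoints = []
    for part in remote.split(","):
        proto, host, port = part.split(":")
        endpoints.append((proto, host, int(port)))
    return endpoints

remote = "tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641"
print(parse_ovn_remote(remote))
# → [('tcp', '192.168.16.10', 16641), ('tcp', '192.168.16.11', 16641), ('tcp', '192.168.16.12', 16641)]
```

Three endpoints, one per OVN DB host (testbed-node-0/1/2), so any chassis can reach the southbound DB cluster.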
2026-03-28 00:54:37.271119 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-28 00:54:37.271165 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-03-28 00:54:37.271185 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-03-28 00:54:37.271204 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-28 00:54:37.271222 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-28 00:54:37.271241 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-28 00:54:37.271271 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-28 00:54:37.271292 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-28 00:54:37.271307 | orchestrator |
2026-03-28 00:54:37.271319 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:37.271329 | orchestrator | Saturday 28 March 2026 00:50:40 +0000 (0:00:24.175) 0:00:53.624 ********
2026-03-28 00:54:37.271340 | orchestrator |
2026-03-28 00:54:37.271351 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:37.271362 | orchestrator | Saturday 28 March 2026 00:50:40 +0000 (0:00:00.258) 0:00:53.883 ********
2026-03-28 00:54:37.271372 | orchestrator |
2026-03-28 00:54:37.271383 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:37.271394 | orchestrator | Saturday 28 March 2026 00:50:40 +0000 (0:00:00.063) 0:00:53.946 ********
2026-03-28 00:54:37.271404 | orchestrator |
2026-03-28 00:54:37.271415 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:37.271443 | orchestrator | Saturday 28 March 2026 00:50:40 +0000 (0:00:00.077) 0:00:54.024 ********
2026-03-28 00:54:37.271454 | orchestrator |
2026-03-28 00:54:37.271466 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:37.271476 | orchestrator | Saturday 28 March 2026 00:50:40 +0000 (0:00:00.076) 0:00:54.100 ********
2026-03-28 00:54:37.271487 | orchestrator |
2026-03-28 00:54:37.271498 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-28 00:54:37.271509 | orchestrator | Saturday 28 March 2026 00:50:40 +0000 (0:00:00.066) 0:00:54.166 ********
2026-03-28 00:54:37.271519 | orchestrator |
2026-03-28 00:54:37.271530 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-03-28 00:54:37.271541 | orchestrator | Saturday 28 March 2026 00:50:41 +0000 (0:00:00.073) 0:00:54.240 ********
2026-03-28 00:54:37.271551 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.271563 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:54:37.271574 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.271584 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.271595 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:54:37.271605 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:54:37.271616 | orchestrator |
2026-03-28 00:54:37.271627 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-03-28 00:54:37.271638 | orchestrator | Saturday 28 March 2026 00:50:42 +0000 (0:00:01.626) 0:00:55.867 ********
2026-03-28 00:54:37.271649 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:37.271660 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:37.271671 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:37.271682 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:54:37.271747 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:54:37.271760 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:54:37.271771 | orchestrator |
2026-03-28 00:54:37.271782 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-03-28 00:54:37.271793 | orchestrator |
2026-03-28 00:54:37.271803 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-28 00:54:37.271814 | orchestrator | Saturday 28 March 2026 00:50:51 +0000 (0:00:08.666) 0:01:04.533 ********
2026-03-28 00:54:37.271826 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:54:37.271836 | orchestrator |
2026-03-28 00:54:37.271847 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-28 00:54:37.271858 | orchestrator | Saturday 28 March 2026 00:50:52 +0000 (0:00:00.777) 0:01:05.311 ********
2026-03-28 00:54:37.271868 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:54:37.271879 | orchestrator |
2026-03-28 00:54:37.271890 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-03-28 00:54:37.271900 | orchestrator | Saturday 28 March 2026 00:50:52 +0000 (0:00:00.712) 0:01:06.023 ********
2026-03-28 00:54:37.271911 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.271922 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.271933 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.271943 | orchestrator |
2026-03-28 00:54:37.271954 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-28 00:54:37.271964 | orchestrator | Saturday 28 March 2026 00:50:53 +0000 (0:00:01.047) 0:01:07.071 ********
2026-03-28 00:54:37.271975 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.271986 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.271996 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.272007 | orchestrator |
2026-03-28 00:54:37.272017 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-28 00:54:37.272028 | orchestrator | Saturday 28 March 2026 00:50:54 +0000 (0:00:00.371) 0:01:07.443 ********
2026-03-28 00:54:37.272039 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.272049 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.272059 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.272076 | orchestrator |
2026-03-28 00:54:37.272086 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-28 00:54:37.272105 | orchestrator | Saturday 28 March 2026 00:50:54 +0000 (0:00:00.548) 0:01:07.992 ********
2026-03-28 00:54:37.272115 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.272124 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.272134 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.272143 | orchestrator |
2026-03-28 00:54:37.272153 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-28 00:54:37.272162 | orchestrator | Saturday 28 March 2026 00:50:55 +0000 (0:00:00.584) 0:01:08.576 ********
2026-03-28 00:54:37.272172 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.272182 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.272191 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.272201 | orchestrator |
2026-03-28 00:54:37.272211 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-28 00:54:37.272222 | orchestrator | Saturday 28 March 2026 00:50:56 +0000 (0:00:00.947) 0:01:09.524 ********
2026-03-28 00:54:37.272231 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272241 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272250 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272260 | orchestrator |
2026-03-28 00:54:37.272269 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-28 00:54:37.272279 | orchestrator | Saturday 28 March 2026 00:50:56 +0000 (0:00:00.622) 0:01:10.147 ********
2026-03-28 00:54:37.272289 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272299 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272308 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272318 | orchestrator |
2026-03-28 00:54:37.272327 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-28 00:54:37.272338 | orchestrator | Saturday 28 March 2026 00:50:57 +0000 (0:00:00.564) 0:01:10.712 ********
2026-03-28 00:54:37.272348 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272357 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272367 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272377 | orchestrator |
2026-03-28 00:54:37.272387 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-28 00:54:37.272396 | orchestrator | Saturday 28 March 2026 00:50:57 +0000 (0:00:00.433) 0:01:11.146 ********
2026-03-28 00:54:37.272406 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272416 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272426 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272435 | orchestrator |
2026-03-28 00:54:37.272445 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-28 00:54:37.272455 | orchestrator | Saturday 28 March 2026 00:50:58 +0000 (0:00:00.662) 0:01:11.808 ********
2026-03-28 00:54:37.272464 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272474 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272484 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272493 | orchestrator |
2026-03-28 00:54:37.272503 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-28 00:54:37.272513 | orchestrator | Saturday 28 March 2026 00:50:58 +0000 (0:00:00.389) 0:01:12.197 ********
2026-03-28 00:54:37.272523 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272534 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272545 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272554 | orchestrator |
2026-03-28 00:54:37.272564 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-28 00:54:37.272574 | orchestrator | Saturday 28 March 2026 00:50:59 +0000 (0:00:00.335) 0:01:12.533 ********
2026-03-28 00:54:37.272583 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272593 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272603 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272613 | orchestrator |
2026-03-28 00:54:37.272622 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-28 00:54:37.272639 | orchestrator | Saturday 28 March 2026 00:50:59 +0000 (0:00:00.284) 0:01:12.817 ********
2026-03-28 00:54:37.272649 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272659 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272669 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272678 | orchestrator |
2026-03-28 00:54:37.272688 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-28 00:54:37.272714 | orchestrator | Saturday 28 March 2026 00:50:59 +0000 (0:00:00.379) 0:01:13.196 ********
2026-03-28 00:54:37.272724 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272734 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272743 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272753 | orchestrator |
2026-03-28 00:54:37.272763 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-28 00:54:37.272772 | orchestrator | Saturday 28 March 2026 00:51:00 +0000 (0:00:00.751) 0:01:13.948 ********
2026-03-28 00:54:37.272782 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272791 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272801 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272810 | orchestrator |
2026-03-28 00:54:37.272820 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-28 00:54:37.272829 | orchestrator | Saturday 28 March 2026 00:51:01 +0000 (0:00:00.377) 0:01:14.325 ********
2026-03-28 00:54:37.272839 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272849 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272858 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272868 | orchestrator |
2026-03-28 00:54:37.272877 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-28 00:54:37.272887 | orchestrator | Saturday 28 March 2026 00:51:01 +0000 (0:00:00.349) 0:01:14.675 ********
2026-03-28 00:54:37.272896 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.272906 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.272916 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.272926 | orchestrator |
2026-03-28 00:54:37.272936 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-28 00:54:37.272945 | orchestrator | Saturday 28 March 2026 00:51:01 +0000 (0:00:00.345) 0:01:15.021 ********
2026-03-28 00:54:37.272955 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:54:37.272965 | orchestrator |
2026-03-28 00:54:37.272982 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-28 00:54:37.272993 | orchestrator | Saturday 28 March 2026 00:51:02 +0000 (0:00:01.139) 0:01:16.160 ********
2026-03-28 00:54:37.273002 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.273012 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.273022 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.273031 | orchestrator |
2026-03-28 00:54:37.273041 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-28 00:54:37.273050 | orchestrator | Saturday 28 March 2026 00:51:03 +0000 (0:00:00.688) 0:01:16.849 ********
2026-03-28 00:54:37.273138 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.273167 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.273178 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.273187 | orchestrator |
2026-03-28 00:54:37.273198 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-28 00:54:37.273208 | orchestrator | Saturday 28 March 2026 00:51:04 +0000 (0:00:00.488) 0:01:17.338 ********
2026-03-28 00:54:37.273218 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.273233 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.273243 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.273253 | orchestrator |
2026-03-28 00:54:37.273262 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-28 00:54:37.273271 | orchestrator | Saturday 28 March 2026 00:51:04 +0000 (0:00:00.681) 0:01:18.019 ********
2026-03-28 00:54:37.273288 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.273298 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.273307 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.273317 | orchestrator |
2026-03-28 00:54:37.273326 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-28 00:54:37.273336 | orchestrator | Saturday 28 March 2026 00:51:05 +0000 (0:00:00.439) 0:01:18.458 ********
2026-03-28 00:54:37.273345 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.273355 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.273364 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.273374 | orchestrator |
2026-03-28 00:54:37.273384 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-28 00:54:37.273393 | orchestrator | Saturday 28 March 2026 00:51:05 +0000 (0:00:00.456) 0:01:18.915 ********
2026-03-28 00:54:37.273403 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.273412 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.273422 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.273431 | orchestrator |
2026-03-28 00:54:37.273440 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-28 00:54:37.273450 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:00.425) 0:01:19.340 ********
2026-03-28 00:54:37.273459 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.273469 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.273478 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.273487 | orchestrator |
2026-03-28 00:54:37.273497 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-28 00:54:37.273507 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:00.587) 0:01:19.928 ********
2026-03-28 00:54:37.273516 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.273525 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.273534 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.273544 | orchestrator |
2026-03-28 00:54:37.273554 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-28 00:54:37.273563 | orchestrator | Saturday 28 March 2026 00:51:07 +0000 (0:00:00.356) 0:01:20.285 ********
2026-03-28 00:54:37.273576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273759 | orchestrator |
2026-03-28 00:54:37.273769 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-28 00:54:37.273779 | orchestrator | Saturday 28 March 2026 00:51:09 +0000 (0:00:02.833) 0:01:23.119 ********
2026-03-28 00:54:37.273794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 00:54:37.273805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.273815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.273825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.273836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.273846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.273863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.273880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.273896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.273906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.273916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.273926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.273936 | orchestrator | 2026-03-28 00:54:37.273946 | orchestrator | 
TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-03-28 00:54:37.273956 | orchestrator | Saturday 28 March 2026 00:51:15 +0000 (0:00:05.916) 0:01:29.035 ********
2026-03-28 00:54:37.273966 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-03-28 00:54:37.273976 | orchestrator |
2026-03-28 00:54:37.273986 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-03-28 00:54:37.273995 | orchestrator | Saturday 28 March 2026 00:51:16 +0000 (0:00:00.636) 0:01:29.671 ********
2026-03-28 00:54:37.274005 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:37.274015 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:37.274100 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:37.274110 | orchestrator |
2026-03-28 00:54:37.274119 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-03-28 00:54:37.274137 | orchestrator | Saturday 28 March 2026 00:51:17 +0000 (0:00:01.621) 0:01:30.349 ********
2026-03-28 00:54:37.274147 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:37.274156 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:37.274165 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:37.274175 | orchestrator |
2026-03-28 00:54:37.274184 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-03-28 00:54:37.274194 | orchestrator | Saturday 28 March 2026 00:51:18 +0000 (0:00:02.294) 0:01:31.971 ********
2026-03-28 00:54:37.274203 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:37.274212 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:37.274222 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:37.274231 | orchestrator |
2026-03-28 00:54:37.274240 | orchestrator | TASK [service-check-containers : ovn_db | Check containers]
******************** 2026-03-28 00:54:37.274250 | orchestrator | Saturday 28 March 2026 00:51:21 +0000 (0:00:02.294) 0:01:34.265 ******** 2026-03-28 00:54:37.274279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.274290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.274305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.274316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.274326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.274336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.274354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.274363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.274381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.274417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274427 | orchestrator | 2026-03-28 00:54:37.274437 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-28 00:54:37.274447 | orchestrator | Saturday 28 March 2026 00:51:25 +0000 (0:00:04.213) 0:01:38.478 ******** 2026-03-28 00:54:37.274457 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 00:54:37.274466 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.274476 | orchestrator | } 2026-03-28 00:54:37.274486 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:54:37.274496 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.274506 | orchestrator | } 2026-03-28 00:54:37.274515 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:54:37.274533 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.274543 | orchestrator | } 2026-03-28 00:54:37.274552 | orchestrator | 2026-03-28 00:54:37.274562 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:54:37.274571 | orchestrator | Saturday 28 March 2026 00:51:25 +0000 (0:00:00.714) 0:01:39.193 ******** 2026-03-28 00:54:37.274582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.274740 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.274752 | orchestrator | 2026-03-28 00:54:37.274762 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-28 00:54:37.274772 | orchestrator | Saturday 28 March 2026 00:51:28 +0000 (0:00:02.910) 0:01:42.104 ******** 2026-03-28 00:54:37.274781 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-03-28 00:54:37.274791 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-03-28 00:54:37.274801 | orchestrator | 
changed: [testbed-node-0] => (item=1)
2026-03-28 00:54:37.274810 | orchestrator |
2026-03-28 00:54:37.274820 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-03-28 00:54:37.274830 | orchestrator | Saturday 28 March 2026 00:52:00 +0000 (0:00:31.434) 0:02:13.538 ********
2026-03-28 00:54:37.274840 | orchestrator | changed: [testbed-node-0] => {
2026-03-28 00:54:37.274849 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 00:54:37.274858 | orchestrator | }
2026-03-28 00:54:37.274868 | orchestrator | changed: [testbed-node-1] => {
2026-03-28 00:54:37.274878 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 00:54:37.274887 | orchestrator | }
2026-03-28 00:54:37.274897 | orchestrator | changed: [testbed-node-2] => {
2026-03-28 00:54:37.274906 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 00:54:37.274916 | orchestrator | }
2026-03-28 00:54:37.274926 | orchestrator |
2026-03-28 00:54:37.274947 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-28 00:54:37.274958 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:00.806) 0:02:14.345 ********
2026-03-28 00:54:37.274967 | orchestrator |
2026-03-28 00:54:37.274977 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-28 00:54:37.274987 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:00.088) 0:02:14.434 ********
2026-03-28 00:54:37.274996 | orchestrator |
2026-03-28 00:54:37.275006 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-28 00:54:37.275015 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:00.072) 0:02:14.507 ********
2026-03-28 00:54:37.275025 | orchestrator |
2026-03-28 00:54:37.275034 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-28 00:54:37.275044 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:00.072) 0:02:14.579 ********
2026-03-28 00:54:37.275065 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:37.275075 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:37.275085 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:37.275094 | orchestrator |
2026-03-28 00:54:37.275104 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-28 00:54:37.275113 | orchestrator | Saturday 28 March 2026 00:52:17 +0000 (0:00:15.784) 0:02:30.364 ********
2026-03-28 00:54:37.275123 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:37.275133 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:37.275142 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:37.275152 | orchestrator |
2026-03-28 00:54:37.275161 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-03-28 00:54:37.275171 | orchestrator | Saturday 28 March 2026 00:52:32 +0000 (0:00:15.592) 0:02:45.957 ********
2026-03-28 00:54:37.275180 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-03-28 00:54:37.275190 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-03-28 00:54:37.275200 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-03-28 00:54:37.275209 | orchestrator |
2026-03-28 00:54:37.275219 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-28 00:54:37.275228 | orchestrator | Saturday 28 March 2026 00:52:43 +0000 (0:00:10.457) 0:02:56.414 ********
2026-03-28 00:54:37.275238 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:54:37.275247 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:37.275257 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:54:37.275266 | orchestrator |
2026-03-28 00:54:37.275274 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-28 00:54:37.275282 | orchestrator | Saturday 28 March 2026 00:53:01 +0000 (0:00:18.069) 0:03:14.483 ********
2026-03-28 00:54:37.275289 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:54:37.275297 | orchestrator |
2026-03-28 00:54:37.275305 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-28 00:54:37.275313 | orchestrator | Saturday 28 March 2026 00:53:01 +0000 (0:00:00.151) 0:03:14.634 ********
2026-03-28 00:54:37.275321 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.275328 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.275336 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.275344 | orchestrator |
2026-03-28 00:54:37.275351 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-28 00:54:37.275360 | orchestrator | Saturday 28 March 2026 00:53:02 +0000 (0:00:01.104) 0:03:15.738 ********
2026-03-28 00:54:37.275367 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.275375 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.275383 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:37.275390 | orchestrator |
2026-03-28 00:54:37.275398 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-28 00:54:37.275406 | orchestrator | Saturday 28 March 2026 00:53:03 +0000 (0:00:00.665) 0:03:16.404 ********
2026-03-28 00:54:37.275414 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.275421 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.275429 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.275437 | orchestrator |
2026-03-28 00:54:37.275445 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-28 00:54:37.275452 | orchestrator | Saturday 28 March 2026 00:53:04 +0000 (0:00:00.809) 0:03:17.214 ********
2026-03-28 00:54:37.275460 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:54:37.275468 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:54:37.275476 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:54:37.275483 | orchestrator |
2026-03-28 00:54:37.275491 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-28 00:54:37.275499 | orchestrator | Saturday 28 March 2026 00:53:04 +0000 (0:00:00.636) 0:03:17.850 ********
2026-03-28 00:54:37.275506 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.275514 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.275522 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.275536 | orchestrator |
2026-03-28 00:54:37.275543 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-28 00:54:37.275551 | orchestrator | Saturday 28 March 2026 00:53:05 +0000 (0:00:01.198) 0:03:19.049 ********
2026-03-28 00:54:37.275559 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.275567 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:54:37.275574 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:54:37.275582 | orchestrator |
2026-03-28 00:54:37.275590 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-03-28 00:54:37.275597 | orchestrator | Saturday 28 March 2026 00:53:06 +0000 (0:00:00.979) 0:03:20.029 ********
2026-03-28 00:54:37.275605 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-03-28 00:54:37.275613 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-03-28 00:54:37.275621 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-03-28 00:54:37.275629 | orchestrator |
2026-03-28 00:54:37.275637 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-28 00:54:37.275645 | orchestrator | Saturday 28 March 2026 00:53:07 +0000 (0:00:01.075) 0:03:21.105 ********
2026-03-28 00:54:37.275653 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:54:37.275660 | orchestrator | ok:
[testbed-node-1] 2026-03-28 00:54:37.275668 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:37.275676 | orchestrator | 2026-03-28 00:54:37.275684 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-28 00:54:37.275713 | orchestrator | Saturday 28 March 2026 00:53:08 +0000 (0:00:00.326) 0:03:21.431 ******** 2026-03-28 00:54:37.275722 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275736 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275744 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275753 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275761 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275776 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275784 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.275808 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.275828 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 
'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.275844 | orchestrator | 2026-03-28 00:54:37.275852 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-28 00:54:37.275860 | orchestrator | Saturday 28 March 2026 00:53:11 +0000 (0:00:03.578) 0:03:25.010 ******** 2026-03-28 00:54:37.275873 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275881 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275889 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275904 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275933 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.275957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.275974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.275987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.275995 | orchestrator | 2026-03-28 00:54:37.276003 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-03-28 00:54:37.276011 | orchestrator | Saturday 28 March 2026 00:53:17 +0000 (0:00:05.243) 0:03:30.253 
******** 2026-03-28 00:54:37.276018 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-03-28 00:54:37.276027 | orchestrator | 2026-03-28 00:54:37.276035 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-03-28 00:54:37.276042 | orchestrator | Saturday 28 March 2026 00:53:17 +0000 (0:00:00.645) 0:03:30.898 ******** 2026-03-28 00:54:37.276050 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:37.276058 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:37.276066 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:37.276073 | orchestrator | 2026-03-28 00:54:37.276081 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-03-28 00:54:37.276094 | orchestrator | Saturday 28 March 2026 00:53:18 +0000 (0:00:00.625) 0:03:31.524 ******** 2026-03-28 00:54:37.276102 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:37.276110 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:37.276118 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:37.276125 | orchestrator | 2026-03-28 00:54:37.276133 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-03-28 00:54:37.276141 | orchestrator | Saturday 28 March 2026 00:53:20 +0000 (0:00:02.026) 0:03:33.550 ******** 2026-03-28 00:54:37.276149 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:37.276157 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:37.276164 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:37.276179 | orchestrator | 2026-03-28 00:54:37.276186 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-03-28 00:54:37.276194 | orchestrator | Saturday 28 March 2026 00:53:22 +0000 (0:00:01.761) 0:03:35.311 ******** 2026-03-28 00:54:37.276202 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 
'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.276211 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.276219 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.276228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.276236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.276251 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.276263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.276276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 
'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.276293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276301 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.276309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 
'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276317 | orchestrator | 2026-03-28 00:54:37.276325 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-28 00:54:37.276333 | orchestrator | Saturday 28 March 2026 00:53:27 +0000 (0:00:05.660) 0:03:40.971 ******** 2026-03-28 00:54:37.276341 | orchestrator | ok: [testbed-node-0] => { 2026-03-28 00:54:37.276349 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.276357 | orchestrator | } 2026-03-28 00:54:37.276364 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:54:37.276372 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.276380 | orchestrator | } 2026-03-28 00:54:37.276387 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:54:37.276395 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.276403 | orchestrator | } 2026-03-28 00:54:37.276411 | orchestrator | 2026-03-28 00:54:37.276418 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:54:37.276426 | orchestrator | Saturday 28 March 2026 00:53:28 +0000 (0:00:00.470) 0:03:41.441 ******** 2026-03-28 00:54:37.276440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:54:37.276541 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 00:54:37.276549 | orchestrator | 2026-03-28 00:54:37.276557 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-28 00:54:37.276565 | orchestrator | Saturday 28 March 2026 00:53:30 +0000 (0:00:02.089) 0:03:43.531 ******** 2026-03-28 00:54:37.276573 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-03-28 00:54:37.276581 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-03-28 00:54:37.276589 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-03-28 00:54:37.276596 | orchestrator | 2026-03-28 00:54:37.276604 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-28 00:54:37.276612 | orchestrator | Saturday 28 March 2026 00:54:00 +0000 (0:00:30.479) 0:04:14.011 ******** 2026-03-28 
00:54:37.276620 | orchestrator | ok: [testbed-node-0] => { 2026-03-28 00:54:37.276627 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.276635 | orchestrator | } 2026-03-28 00:54:37.276643 | orchestrator | ok: [testbed-node-1] => { 2026-03-28 00:54:37.276651 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.276658 | orchestrator | } 2026-03-28 00:54:37.276666 | orchestrator | ok: [testbed-node-2] => { 2026-03-28 00:54:37.276674 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:54:37.276682 | orchestrator | } 2026-03-28 00:54:37.276689 | orchestrator | 2026-03-28 00:54:37.276712 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:37.276720 | orchestrator | Saturday 28 March 2026 00:54:02 +0000 (0:00:01.265) 0:04:15.277 ******** 2026-03-28 00:54:37.276728 | orchestrator | 2026-03-28 00:54:37.276736 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:37.276744 | orchestrator | Saturday 28 March 2026 00:54:02 +0000 (0:00:00.072) 0:04:15.349 ******** 2026-03-28 00:54:37.276751 | orchestrator | 2026-03-28 00:54:37.276759 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-28 00:54:37.276767 | orchestrator | Saturday 28 March 2026 00:54:02 +0000 (0:00:00.068) 0:04:15.417 ******** 2026-03-28 00:54:37.276775 | orchestrator | 2026-03-28 00:54:37.276783 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-28 00:54:37.276791 | orchestrator | Saturday 28 March 2026 00:54:02 +0000 (0:00:00.062) 0:04:15.480 ******** 2026-03-28 00:54:37.276798 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:37.276807 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:37.276815 | orchestrator | 2026-03-28 00:54:37.276822 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] 
************************* 2026-03-28 00:54:37.276831 | orchestrator | Saturday 28 March 2026 00:54:15 +0000 (0:00:13.483) 0:04:28.964 ******** 2026-03-28 00:54:37.276839 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:54:37.276847 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:54:37.276855 | orchestrator | 2026-03-28 00:54:37.276863 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-28 00:54:37.276871 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:12.467) 0:04:41.431 ******** 2026-03-28 00:54:37.276878 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:54:37.276886 | orchestrator | 2026-03-28 00:54:37.276900 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-28 00:54:37.276908 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:00.137) 0:04:41.569 ******** 2026-03-28 00:54:37.276916 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:37.276924 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:37.276932 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:37.276940 | orchestrator | 2026-03-28 00:54:37.276948 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-28 00:54:37.276956 | orchestrator | Saturday 28 March 2026 00:54:29 +0000 (0:00:00.843) 0:04:42.412 ******** 2026-03-28 00:54:37.276964 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:37.276972 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:37.276981 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:37.276989 | orchestrator | 2026-03-28 00:54:37.276996 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-28 00:54:37.277004 | orchestrator | Saturday 28 March 2026 00:54:29 +0000 (0:00:00.684) 0:04:43.097 ******** 2026-03-28 00:54:37.277027 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:37.277035 | 
orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:37.277043 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:37.277051 | orchestrator | 2026-03-28 00:54:37.277059 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-28 00:54:37.277066 | orchestrator | Saturday 28 March 2026 00:54:30 +0000 (0:00:00.798) 0:04:43.895 ******** 2026-03-28 00:54:37.277074 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:54:37.277082 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:54:37.277089 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:54:37.277097 | orchestrator | 2026-03-28 00:54:37.277105 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-28 00:54:37.277113 | orchestrator | Saturday 28 March 2026 00:54:31 +0000 (0:00:00.650) 0:04:44.545 ******** 2026-03-28 00:54:37.277121 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:37.277135 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:37.277143 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:37.277150 | orchestrator | 2026-03-28 00:54:37.277158 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-28 00:54:37.277166 | orchestrator | Saturday 28 March 2026 00:54:32 +0000 (0:00:00.752) 0:04:45.298 ******** 2026-03-28 00:54:37.277174 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:54:37.277181 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:54:37.277189 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:54:37.277197 | orchestrator | 2026-03-28 00:54:37.277205 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-03-28 00:54:37.277212 | orchestrator | Saturday 28 March 2026 00:54:33 +0000 (0:00:01.073) 0:04:46.371 ******** 2026-03-28 00:54:37.277220 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-03-28 00:54:37.277228 | orchestrator | ok: [testbed-node-1] => 
(item=1) 2026-03-28 00:54:37.277236 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-03-28 00:54:37.277244 | orchestrator | 2026-03-28 00:54:37.277256 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:54:37.277265 | orchestrator | testbed-node-0 : ok=64  changed=26  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-28 00:54:37.277273 | orchestrator | testbed-node-1 : ok=62  changed=27  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-03-28 00:54:37.277281 | orchestrator | testbed-node-2 : ok=62  changed=27  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-03-28 00:54:37.277289 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:54:37.277297 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:54:37.277310 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 00:54:37.277318 | orchestrator | 2026-03-28 00:54:37.277326 | orchestrator | 2026-03-28 00:54:37.277333 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:54:37.277341 | orchestrator | Saturday 28 March 2026 00:54:34 +0000 (0:00:01.359) 0:04:47.731 ******** 2026-03-28 00:54:37.277349 | orchestrator | =============================================================================== 2026-03-28 00:54:37.277356 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 31.43s 2026-03-28 00:54:37.277364 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 30.48s 2026-03-28 00:54:37.277372 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 29.27s 2026-03-28 00:54:37.277380 | orchestrator | ovn-db : Restart ovn-sb-db container 
----------------------------------- 28.06s 2026-03-28 00:54:37.277387 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 24.17s 2026-03-28 00:54:37.277395 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 18.07s 2026-03-28 00:54:37.277403 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 10.46s 2026-03-28 00:54:37.277410 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.67s 2026-03-28 00:54:37.277418 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.92s 2026-03-28 00:54:37.277425 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.66s 2026-03-28 00:54:37.277433 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.24s 2026-03-28 00:54:37.277441 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.21s 2026-03-28 00:54:37.277449 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 3.82s 2026-03-28 00:54:37.277456 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.58s 2026-03-28 00:54:37.277464 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.42s 2026-03-28 00:54:37.277472 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 3.42s 2026-03-28 00:54:37.277479 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.15s 2026-03-28 00:54:37.277487 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.03s 2026-03-28 00:54:37.277495 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.91s 2026-03-28 00:54:37.277502 | orchestrator | ovn-db : Ensuring config directories exist 
------------------------------ 2.83s
2026-03-28 00:54:37.277510 | orchestrator | 2026-03-28 00:54:37 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:54:37.277518 | orchestrator | 2026-03-28 00:54:37 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:54:40.303448 | orchestrator | 2026-03-28 00:54:40 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state STARTED
2026-03-28 00:54:40.305219 | orchestrator | 2026-03-28 00:54:40 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:54:40.305255 | orchestrator | 2026-03-28 00:54:40 | INFO  | Wait 1 second(s) until the next check
[... identical polling lines elided: tasks bb0f003f-8e88-4a10-8288-b8432ac4e824 and 14509cd2-e77d-4c13-885e-b01b4a4cee04 remain in state STARTED, checked every ~3 s from 00:54:43 through 00:56:11 ...]
2026-03-28 00:56:14.737109 | orchestrator | 2026-03-28 00:56:14 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:14.737246 | orchestrator | 2026-03-28 00:56:14 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:14.744296 | orchestrator | 2026-03-28 00:56:14 | INFO  | Task bb0f003f-8e88-4a10-8288-b8432ac4e824 is in state SUCCESS
2026-03-28 00:56:14.746071 | orchestrator |
2026-03-28 00:56:14.746121 | orchestrator |
2026-03-28 00:56:14.746138 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:56:14.746150 | orchestrator |
2026-03-28 00:56:14.746161 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 00:56:14.746173 | orchestrator | Saturday 28 March 2026 00:48:10 +0000 (0:00:00.495) 0:00:00.495 ********
2026-03-28 00:56:14.746184 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.746196 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.746207 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.746218 | orchestrator |
2026-03-28 00:56:14.746229 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 00:56:14.746240 | orchestrator | Saturday 28 March 2026 00:48:10 +0000 (0:00:00.751) 0:00:01.246 ********
2026-03-28 00:56:14.746251 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-28 00:56:14.746262 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-28 00:56:14.746278 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-28 00:56:14.746296 | orchestrator |
2026-03-28 00:56:14.746403 |
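The long run of INFO lines above comes from a simple wait loop: query each outstanding task's state, report it, sleep, and repeat until every task leaves STARTED. A minimal Python sketch of that pattern is below; `get_state` is a hypothetical callback standing in for the real task-status lookup, not the actual OSISM client code.

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll until every task leaves the STARTED state (illustrative sketch).

    get_state: callable mapping a task id to its current state string
    (hypothetical stand-in for the real status lookup).
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state  # finished (e.g. SUCCESS or FAILURE)
        pending -= results.keys()
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return results


# Toy driver: the task reports STARTED once, then SUCCESS on the next poll.
states = iter(["STARTED", "SUCCESS"])
print(wait_for_tasks(lambda t: next(states), ["bb0f003f"], interval=0.0))
```

Note that a fixed interval is what the log shows; a production loop might add jitter or backoff so many concurrent builds do not poll in lockstep.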
orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-28 00:56:14.746426 | orchestrator |
2026-03-28 00:56:14.746437 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-28 00:56:14.746448 | orchestrator | Saturday 28 March 2026 00:48:11 +0000 (0:00:00.769) 0:00:02.016 ********
2026-03-28 00:56:14.746459 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:56:14.746470 | orchestrator |
2026-03-28 00:56:14.746481 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-28 00:56:14.746491 | orchestrator | Saturday 28 March 2026 00:48:12 +0000 (0:00:01.141) 0:00:03.158 ********
2026-03-28 00:56:14.746601 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.746614 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.746624 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.746636 | orchestrator |
2026-03-28 00:56:14.746741 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-28 00:56:14.746765 | orchestrator | Saturday 28 March 2026 00:48:14 +0000 (0:00:01.610) 0:00:04.768 ********
2026-03-28 00:56:14.746779 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:56:14.746791 | orchestrator |
2026-03-28 00:56:14.746803 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-28 00:56:14.746816 | orchestrator | Saturday 28 March 2026 00:48:15 +0000 (0:00:01.011) 0:00:05.780 ********
2026-03-28 00:56:14.746881 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.746895 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.746906 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.746917 | orchestrator |
2026-03-28 00:56:14.746956 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-28 00:56:14.746967 | orchestrator | Saturday 28 March 2026 00:48:17 +0000 (0:00:01.611) 0:00:07.392 ********
2026-03-28 00:56:14.746978 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:56:14.746989 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:56:14.746999 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:56:14.747010 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:56:14.747021 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:56:14.747038 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-28 00:56:14.747071 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-28 00:56:14.747082 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-28 00:56:14.747093 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-28 00:56:14.747103 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-28 00:56:14.747114 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-28 00:56:14.747125 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-28 00:56:14.747135 | orchestrator |
2026-03-28 00:56:14.747146 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-28 00:56:14.747162 | orchestrator | Saturday 28 March 2026 00:48:22 +0000 (0:00:05.839) 0:00:13.231 ********
2026-03-28 00:56:14.747181 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-28 00:56:14.747199 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-28 00:56:14.747215 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-28 00:56:14.747234 | orchestrator |
2026-03-28 00:56:14.747248 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-28 00:56:14.747258 | orchestrator | Saturday 28 March 2026 00:48:24 +0000 (0:00:01.603) 0:00:14.835 ********
2026-03-28 00:56:14.747269 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-28 00:56:14.747280 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-28 00:56:14.747291 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-28 00:56:14.747302 | orchestrator |
2026-03-28 00:56:14.747313 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-28 00:56:14.747324 | orchestrator | Saturday 28 March 2026 00:48:26 +0000 (0:00:02.214) 0:00:17.050 ********
2026-03-28 00:56:14.747335 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-28 00:56:14.747346 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.747374 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-28 00:56:14.747385 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.747396 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-28 00:56:14.747407 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.747418 | orchestrator |
2026-03-28 00:56:14.747429 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-28 00:56:14.747440 | orchestrator | Saturday 28 March 2026 00:48:27 +0000 (0:00:01.021) 0:00:18.072 ********
2026-03-28 00:56:14.747454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value':
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.747471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.747491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.747509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.747540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.747560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.747572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.747585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.747596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.747642 | orchestrator | 2026-03-28 00:56:14.747656 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-28 00:56:14.747667 | orchestrator | Saturday 28 March 2026 00:48:32 +0000 (0:00:04.608) 0:00:22.681 ******** 2026-03-28 00:56:14.747678 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.747689 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.747699 | orchestrator | changed: 
[testbed-node-2] 2026-03-28 00:56:14.747710 | orchestrator | 2026-03-28 00:56:14.747721 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-28 00:56:14.747731 | orchestrator | Saturday 28 March 2026 00:48:35 +0000 (0:00:03.469) 0:00:26.150 ******** 2026-03-28 00:56:14.747742 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-28 00:56:14.747753 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-28 00:56:14.747763 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-28 00:56:14.747774 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-28 00:56:14.747785 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-28 00:56:14.747795 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-28 00:56:14.747806 | orchestrator | 2026-03-28 00:56:14.747817 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-28 00:56:14.747828 | orchestrator | Saturday 28 March 2026 00:48:40 +0000 (0:00:04.302) 0:00:30.452 ******** 2026-03-28 00:56:14.747838 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.747849 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.747865 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.747876 | orchestrator | 2026-03-28 00:56:14.747887 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-28 00:56:14.747898 | orchestrator | Saturday 28 March 2026 00:48:42 +0000 (0:00:02.218) 0:00:32.671 ******** 2026-03-28 00:56:14.747909 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:56:14.747919 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:56:14.747930 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:56:14.747940 | orchestrator | 2026-03-28 00:56:14.747951 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-28 00:56:14.747962 | 
orchestrator | Saturday 28 March 2026 00:48:45 +0000 (0:00:03.499) 0:00:36.170 ******** 2026-03-28 00:56:14.747973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.747992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.748107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.748128 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 00:56:14.748185 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.748199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.748217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.748229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.748240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 00:56:14.748251 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.748272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.748291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.748302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.748313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 00:56:14.748324 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.748335 | orchestrator | 2026-03-28 00:56:14.748346 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-28 00:56:14.748357 | orchestrator | Saturday 28 March 2026 00:48:47 +0000 (0:00:01.584) 0:00:37.755 ******** 2026-03-28 00:56:14.748373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.748444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 00:56:14.748459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.748482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 00:56:14.748506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.748592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793', '__omit_place_holder__ccdfec29932722a1ec36a5a2b2afb1fed2073793'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-28 00:56:14.748604 | orchestrator | 2026-03-28 00:56:14.748619 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-28 00:56:14.748638 | orchestrator | Saturday 28 March 2026 00:48:52 +0000 (0:00:04.684) 0:00:42.439 ******** 2026-03-28 00:56:14.748658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748703 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748753 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.748764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.748780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.748791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.748802 | orchestrator | 2026-03-28 00:56:14.748813 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-28 00:56:14.748839 | orchestrator | Saturday 28 March 2026 00:48:57 +0000 (0:00:05.537) 0:00:47.976 ******** 2026-03-28 00:56:14.748858 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 00:56:14.748869 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 00:56:14.748880 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-28 00:56:14.748890 | orchestrator | 2026-03-28 00:56:14.748901 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-28 00:56:14.748912 | orchestrator | Saturday 28 March 2026 00:48:59 +0000 (0:00:01.969) 0:00:49.946 ******** 2026-03-28 00:56:14.748923 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-28 00:56:14.748934 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-28 00:56:14.748945 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-28 00:56:14.748956 | orchestrator | 2026-03-28 00:56:14.750237 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-28 00:56:14.750334 | orchestrator | Saturday 28 March 2026 00:49:05 +0000 (0:00:05.565) 0:00:55.512 ******** 
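The skip/changed pattern in the loop output above follows each service's `enabled` flag and optional `healthcheck` key in the item dicts (haproxy-ssh is skipped because `enabled` is False; keepalived runs but defines no healthcheck). A minimal standalone sketch of that filtering — not kolla-ansible code; the service entries are trimmed copies of the log items, and the node-0 address is used for illustration:

```python
# Sketch only (assumption: simplified copies of the service dicts from the
# log above). Returns {container_name: healthcheck command} for services
# that are enabled AND define a healthcheck, mirroring which items the
# tasks above act on versus skip.
def enabled_healthchecks(services):
    out = {}
    for value in services.values():
        if not value.get("enabled"):
            continue  # skipped entirely, like haproxy-ssh in the log
        hc = value.get("healthcheck")
        if hc:
            # test is ['CMD-SHELL', '<command>']; keep the command part
            out[value["container_name"]] = hc["test"][1]
    return out

services = {
    "haproxy": {"container_name": "haproxy", "enabled": True,
                "healthcheck": {"test": ["CMD-SHELL",
                                         "healthcheck_curl http://192.168.16.10:61313"]}},
    "proxysql": {"container_name": "proxysql", "enabled": True,
                 "healthcheck": {"test": ["CMD-SHELL",
                                          "healthcheck_listen proxysql 6032"]}},
    "keepalived": {"container_name": "keepalived", "enabled": True},
    "haproxy-ssh": {"container_name": "haproxy_ssh", "enabled": False,
                    "healthcheck": {"test": ["CMD-SHELL",
                                             "healthcheck_listen sshd 2985"]}},
}

print(enabled_healthchecks(services))
# → {'haproxy': 'healthcheck_curl http://192.168.16.10:61313',
#    'proxysql': 'healthcheck_listen proxysql 6032'}
```

Note how only the per-node healthcheck URL differs between hosts in the log (`192.168.16.10/.11/.12`); everything else in the haproxy item is identical across testbed-node-0/1/2.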
2026-03-28 00:56:14.750352 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.750364 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.750375 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.750386 | orchestrator |
2026-03-28 00:56:14.750397 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-03-28 00:56:14.750408 | orchestrator | Saturday 28 March 2026 00:49:08 +0000 (0:00:03.370) 0:00:58.882 ********
2026-03-28 00:56:14.750420 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-28 00:56:14.750432 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-28 00:56:14.750442 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-28 00:56:14.750453 | orchestrator |
2026-03-28 00:56:14.750464 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-03-28 00:56:14.750475 | orchestrator | Saturday 28 March 2026 00:49:11 +0000 (0:00:03.016) 0:01:01.899 ********
2026-03-28 00:56:14.750486 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-28 00:56:14.750496 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-28 00:56:14.750507 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-28 00:56:14.750518 | orchestrator |
2026-03-28 00:56:14.750572 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-28 00:56:14.750584 | orchestrator | Saturday 28 March 2026 00:49:14 +0000 (0:00:02.525) 0:01:04.425 ********
2026-03-28 00:56:14.750595 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:56:14.750605 | orchestrator |
2026-03-28 00:56:14.750616 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-03-28 00:56:14.750627 | orchestrator | Saturday 28 March 2026 00:49:15 +0000 (0:00:00.895) 0:01:05.320 ********
2026-03-28 00:56:14.750638 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-03-28 00:56:14.750649 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-03-28 00:56:14.750660 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-03-28 00:56:14.750671 | orchestrator |
2026-03-28 00:56:14.750682 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-03-28 00:56:14.750693 | orchestrator | Saturday 28 March 2026 00:49:18 +0000 (0:00:03.049) 0:01:08.370 ********
2026-03-28 00:56:14.750738 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-03-28 00:56:14.750750 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-03-28 00:56:14.750760 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-03-28 00:56:14.750771 | orchestrator |
2026-03-28 00:56:14.750783 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] ***************************
2026-03-28 00:56:14.750794 | orchestrator | Saturday 28 March 2026 00:49:21 +0000 (0:00:03.032) 0:01:11.403 ********
2026-03-28 00:56:14.750804 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.750815 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.750839 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.750851 | orchestrator |
2026-03-28 00:56:14.750862 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-03-28 
00:56:14.750872 | orchestrator | Saturday 28 March 2026 00:49:21 +0000 (0:00:00.358) 0:01:11.761 ******** 2026-03-28 00:56:14.750883 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.750894 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.750904 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.750915 | orchestrator | 2026-03-28 00:56:14.750926 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-28 00:56:14.750937 | orchestrator | Saturday 28 March 2026 00:49:21 +0000 (0:00:00.439) 0:01:12.201 ******** 2026-03-28 00:56:14.750951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.750997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751010 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751058 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.751083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.751104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.751116 | orchestrator | 2026-03-28 00:56:14.751127 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-28 00:56:14.751138 | orchestrator | Saturday 28 March 2026 00:49:25 +0000 (0:00:03.664) 0:01:15.866 ******** 2026-03-28 00:56:14.751149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.751161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.751179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.751191 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.751207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.751218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.751230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.751241 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.751259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.751271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.751289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.751300 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.751311 | orchestrator | 2026-03-28 00:56:14.751322 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-28 00:56:14.751333 | orchestrator | Saturday 28 March 2026 00:49:26 +0000 (0:00:00.892) 0:01:16.759 ******** 2026-03-28 00:56:14.751344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.751360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.751372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.751383 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.751401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.751413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-03-28 00:56:14.751430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.751441 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.751452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.751468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.751479 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.751490 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.751501 | orchestrator | 2026-03-28 00:56:14.751512 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-28 00:56:14.751549 | orchestrator | Saturday 28 March 2026 00:49:27 +0000 (0:00:01.338) 0:01:18.097 ******** 2026-03-28 00:56:14.751560 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 00:56:14.751571 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 00:56:14.751582 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-28 00:56:14.751593 | orchestrator | 2026-03-28 00:56:14.751604 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-28 00:56:14.751614 | orchestrator | Saturday 28 March 2026 00:49:30 +0000 (0:00:02.163) 0:01:20.261 ******** 2026-03-28 00:56:14.751625 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 00:56:14.751642 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 00:56:14.751653 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-28 00:56:14.751671 | orchestrator | 2026-03-28 00:56:14.751682 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-28 00:56:14.751693 | orchestrator | Saturday 28 March 2026 00:49:32 +0000 (0:00:02.156) 0:01:22.418 ******** 2026-03-28 00:56:14.751703 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 00:56:14.751714 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 00:56:14.751724 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 00:56:14.751735 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 00:56:14.751746 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.751756 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 00:56:14.751767 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.751778 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 00:56:14.751789 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.751800 | orchestrator | 2026-03-28 00:56:14.751810 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-28 00:56:14.751821 | orchestrator | Saturday 28 March 2026 00:49:34 +0000 (0:00:02.272) 0:01:24.690 ******** 2026-03-28 00:56:14.751832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-28 00:56:14.751920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.751931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.751942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-28 00:56:14.751953 | orchestrator | 2026-03-28 00:56:14.751964 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-28 00:56:14.751980 | orchestrator | Saturday 28 March 2026 00:49:37 +0000 (0:00:02.908) 0:01:27.598 ******** 2026-03-28 00:56:14.751991 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 00:56:14.752005 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:56:14.752023 | orchestrator | } 2026-03-28 00:56:14.752042 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:56:14.752059 | orchestrator |  "msg": "Notifying handlers" 
2026-03-28 00:56:14.752077 | orchestrator | } 2026-03-28 00:56:14.752095 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:56:14.752113 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:56:14.752130 | orchestrator | } 2026-03-28 00:56:14.752147 | orchestrator | 2026-03-28 00:56:14.752165 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:56:14.752182 | orchestrator | Saturday 28 March 2026 00:49:37 +0000 (0:00:00.603) 0:01:28.202 ******** 2026-03-28 00:56:14.752208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.752230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.752242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.752253 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.752264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.752275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.752286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.752304 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.752315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-28 00:56:14.752333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-28 00:56:14.752351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-28 00:56:14.752363 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.752374 | orchestrator | 2026-03-28 00:56:14.752385 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-28 00:56:14.752396 | orchestrator | Saturday 28 March 2026 00:49:39 +0000 (0:00:01.168) 0:01:29.370 ******** 2026-03-28 00:56:14.752407 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.752418 | orchestrator | 2026-03-28 00:56:14.752428 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-28 00:56:14.752439 | orchestrator | Saturday 28 March 2026 00:49:40 +0000 (0:00:00.900) 0:01:30.271 ******** 2026-03-28 00:56:14.752452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.752467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.752485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.752505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.752548 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.752569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.752589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.752609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.752635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.752660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.752678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.752690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.752701 | orchestrator | 2026-03-28 00:56:14.752713 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-28 00:56:14.752723 | orchestrator | Saturday 28 March 2026 00:49:46 +0000 (0:00:06.764) 
0:01:37.036 ******** 2026-03-28 00:56:14.752735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.752746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.752769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.752781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.752792 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.752810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.752821 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-28 00:56:14.752832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.752844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.752861 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.752878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.752889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-28 00:56:14.752908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.752919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.752930 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.752941 | orchestrator |
2026-03-28 00:56:14.752952 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-03-28 00:56:14.752963 | orchestrator | Saturday 28 March 2026 00:49:48 +0000 (0:00:01.481) 0:01:38.517 ********
2026-03-28 00:56:14.752974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.752986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753004 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.753015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753037 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.753048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753070 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.753081 | orchestrator |
2026-03-28 00:56:14.753092 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-03-28 00:56:14.753103 | orchestrator | Saturday 28 March 2026 00:49:51 +0000 (0:00:03.312) 0:01:41.830 ********
2026-03-28 00:56:14.753113 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.753124 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.753135 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.753146 | orchestrator |
2026-03-28 00:56:14.753156 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-03-28 00:56:14.753167 | orchestrator | Saturday 28 March 2026 00:49:54 +0000 (0:00:02.883) 0:01:44.713 ********
2026-03-28 00:56:14.753177 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.753188 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.753199 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.753209 |
orchestrator |
2026-03-28 00:56:14.753220 | orchestrator | TASK [include_role : barbican] *************************************************
2026-03-28 00:56:14.753231 | orchestrator | Saturday 28 March 2026 00:49:58 +0000 (0:00:04.005) 0:01:48.718 ********
2026-03-28 00:56:14.753241 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:56:14.753252 | orchestrator |
2026-03-28 00:56:14.753262 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-03-28 00:56:14.753273 | orchestrator | Saturday 28 March 2026 00:50:00 +0000 (0:00:01.566) 0:01:50.285 ********
2026-03-28 00:56:14.753360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.753385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.753435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.753485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753507 | orchestrator |
2026-03-28 00:56:14.753587 | orchestrator | TASK
[haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-03-28 00:56:14.753603 | orchestrator | Saturday 28 March 2026 00:50:08 +0000 (0:00:08.821) 0:01:59.107 ********
2026-03-28 00:56:14.753621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.753642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753674 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.753686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.753698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image':
'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753727 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.753745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.753757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.753787 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.753798 | orchestrator |
2026-03-28 00:56:14.753808 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-28 00:56:14.753819 | orchestrator | Saturday 28 March 2026 00:50:10 +0000 (0:00:01.538) 0:02:00.645 ********
2026-03-28 00:56:14.753831 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753865 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.753881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753903 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.753914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.753925 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.753936 | orchestrator |
2026-03-28 00:56:14.753947 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-28 00:56:14.753958 | orchestrator | Saturday 28 March 2026 00:50:11 +0000 (0:00:01.142) 0:02:01.788 ********
2026-03-28 00:56:14.753968 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.753979 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.753990 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.754000 | orchestrator |
2026-03-28 00:56:14.754011 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-03-28 00:56:14.754078 | orchestrator | Saturday 28 March 2026 00:50:13 +0000 (0:00:01.739) 0:02:03.530 ********
2026-03-28 00:56:14.754088 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.754098 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.754108 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.754117 | orchestrator |
2026-03-28 00:56:14.754126 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-28 00:56:14.754136 | orchestrator | Saturday 28 March 2026 00:50:16 +0000 (0:00:02.851) 0:02:06.381 ********
2026-03-28 00:56:14.754145 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.754155 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.754165 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.754175 | orchestrator |
2026-03-28 00:56:14.754190 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-28 00:56:14.754201 | orchestrator | Saturday 28 March 2026 00:50:16 +0000 (0:00:00.416) 0:02:06.798 ********
2026-03-28 00:56:14.754210 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:56:14.754219 | orchestrator |
2026-03-28 00:56:14.754229 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config]
*******************
2026-03-28 00:56:14.754238 | orchestrator | Saturday 28 March 2026 00:50:17 +0000 (0:00:01.394) 0:02:08.192 ********
2026-03-28 00:56:14.754249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 00:56:14.754260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 00:56:14.754275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 00:56:14.754286 | orchestrator |
2026-03-28 00:56:14.754295 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-28 00:56:14.754310 | orchestrator | Saturday 28 March 2026 00:50:25 +0000 (0:00:07.088) 0:02:15.280 ********
2026-03-28 00:56:14.754321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 00:56:14.754331 | orchestrator | skipping: [testbed-node-0]
2026-03-28
00:56:14.754348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 00:56:14.754358 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.754368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-03-28 00:56:14.754378 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.754388 | orchestrator |
2026-03-28 00:56:14.754397 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-28 00:56:14.754407 | orchestrator | Saturday 28 March 2026 00:50:27 +0000 (0:00:02.424) 0:02:17.704 ********
2026-03-28 00:56:14.754417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-28 00:56:14.754432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-28 00:56:14.754451 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.754460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-03-28 00:56:14.754471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5
192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:56:14.754481 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.754490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:56:14.754539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-28 00:56:14.754552 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.754562 | orchestrator | 2026-03-28 00:56:14.754571 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-28 00:56:14.754581 | orchestrator | Saturday 28 March 2026 00:50:29 +0000 (0:00:01.816) 0:02:19.521 ******** 2026-03-28 00:56:14.754590 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.754600 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.754609 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.754619 | orchestrator | 2026-03-28 00:56:14.754628 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-28 00:56:14.754638 | orchestrator | Saturday 28 March 2026 00:50:29 +0000 (0:00:00.381) 0:02:19.902 ******** 2026-03-28 00:56:14.754647 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.754657 | 
orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.754666 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.754675 | orchestrator | 2026-03-28 00:56:14.754685 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-28 00:56:14.754694 | orchestrator | Saturday 28 March 2026 00:50:31 +0000 (0:00:01.402) 0:02:21.305 ******** 2026-03-28 00:56:14.754704 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.754713 | orchestrator | 2026-03-28 00:56:14.754722 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-28 00:56:14.754732 | orchestrator | Saturday 28 March 2026 00:50:31 +0000 (0:00:00.946) 0:02:22.251 ******** 2026-03-28 00:56:14.754742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.754770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.754781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.754890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754926 | orchestrator | 2026-03-28 00:56:14.754936 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-28 00:56:14.754945 | orchestrator | Saturday 28 March 2026 00:50:35 +0000 (0:00:03.836) 0:02:26.088 ******** 2026-03-28 00:56:14.754960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.754971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.754997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.755013 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.755024 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.755039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.755050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.755066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.755076 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.755086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.755102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.755116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.755127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.755136 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.755146 | orchestrator | 2026-03-28 00:56:14.755156 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-28 00:56:14.755166 | orchestrator | Saturday 28 March 2026 00:50:36 +0000 (0:00:00.762) 0:02:26.851 ******** 2026-03-28 00:56:14.755176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.755186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.755202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.755212 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.755222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.755232 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.755241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.755257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.755266 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.755276 | orchestrator | 2026-03-28 00:56:14.755285 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-28 00:56:14.755295 | orchestrator | Saturday 28 March 2026 00:50:37 +0000 (0:00:01.336) 0:02:28.187 ******** 2026-03-28 00:56:14.755304 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.755314 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.755324 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.755333 | orchestrator | 2026-03-28 00:56:14.755342 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-28 00:56:14.755352 | orchestrator | Saturday 28 March 2026 00:50:39 +0000 (0:00:01.370) 0:02:29.558 ******** 2026-03-28 00:56:14.755361 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.755371 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.755380 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.755390 | orchestrator | 2026-03-28 00:56:14.755399 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-28 00:56:14.755409 | orchestrator | Saturday 28 March 
2026 00:50:41 +0000 (0:00:02.247) 0:02:31.806 ******** 2026-03-28 00:56:14.755418 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.755428 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.755437 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.755447 | orchestrator | 2026-03-28 00:56:14.755456 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-28 00:56:14.755466 | orchestrator | Saturday 28 March 2026 00:50:41 +0000 (0:00:00.340) 0:02:32.146 ******** 2026-03-28 00:56:14.755475 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.755485 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.755494 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.755503 | orchestrator | 2026-03-28 00:56:14.755513 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-28 00:56:14.755545 | orchestrator | Saturday 28 March 2026 00:50:42 +0000 (0:00:00.536) 0:02:32.683 ******** 2026-03-28 00:56:14.755556 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.755565 | orchestrator | 2026-03-28 00:56:14.755580 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-28 00:56:14.755589 | orchestrator | Saturday 28 March 2026 00:50:43 +0000 (0:00:00.828) 0:02:33.512 ******** 2026-03-28 00:56:14.755599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.755616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 00:56:14.755633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.755653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 00:56:14.755678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.755806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 00:56:14.755816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755881 | orchestrator |
2026-03-28 00:56:14.755891 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-28 00:56:14.755900 | orchestrator | Saturday 28 March 2026 00:50:47 +0000 (0:00:03.932) 0:02:37.444 ********
2026-03-28 00:56:14.755910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.755920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 00:56:14.755935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.755997 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.756007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.756022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 00:56:14.756032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.756049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.756065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.756082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.756099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.756113 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.756132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.756169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 00:56:14.756189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.756215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.756231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.756247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.756263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.756279 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.756293 | orchestrator |
2026-03-28 00:56:14.756308 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-28 00:56:14.756334 | orchestrator | Saturday 28 March 2026 00:50:48 +0000 (0:00:00.973) 0:02:38.418 ********
2026-03-28 00:56:14.756357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.756375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.756393 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.756408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.756425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.756441 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.756452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.756462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.756472 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.756481 | orchestrator |
2026-03-28 00:56:14.756498 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-03-28 00:56:14.756508 | orchestrator | Saturday 28 March 2026 00:50:49 +0000 (0:00:00.982) 0:02:39.401 ********
2026-03-28 00:56:14.756517 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.756560 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.756575 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.756585 | orchestrator |
2026-03-28 00:56:14.756594 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-03-28 00:56:14.756604 | orchestrator | Saturday 28 March 2026 00:50:50 +0000 (0:00:01.179) 0:02:40.581 ********
2026-03-28 00:56:14.756613 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.756623 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.756632 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.756642 | orchestrator |
2026-03-28 00:56:14.756652 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-28 00:56:14.756661 | orchestrator | Saturday 28 March 2026 00:50:52 +0000 (0:00:02.379) 0:02:42.961 ********
2026-03-28 00:56:14.756670 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.756680 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.756689 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.756699 | orchestrator |
2026-03-28 00:56:14.756708 | orchestrator | TASK [include_role : glance] ***************************************************
2026-03-28 00:56:14.756718 | orchestrator | Saturday 28 March 2026 00:50:53 +0000 (0:00:00.370) 0:02:43.331 ********
2026-03-28 00:56:14.756727 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:56:14.756736 | orchestrator |
2026-03-28 00:56:14.756746 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-03-28 00:56:14.756755 | orchestrator | Saturday 28 March 2026 00:50:54 +0000 (0:00:01.078) 0:02:44.410 ********
2026-03-28 00:56:14.756773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-28 00:56:14.756801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-28 00:56:14.756814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-28 00:56:14.756847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-28 00:56:14.756859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 00:56:14.756880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 
'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:56:14.756892 | orchestrator | 2026-03-28 00:56:14.756907 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-28 00:56:14.756917 | orchestrator | Saturday 28 March 2026 00:50:59 +0000 (0:00:05.499) 0:02:49.909 ******** 2026-03-28 00:56:14.756928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 00:56:14.756950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:56:14.756961 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.757456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 00:56:14.757609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:56:14.757645 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.757679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 00:56:14.757708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-28 00:56:14.757720 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.757730 | orchestrator | 2026-03-28 00:56:14.757739 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-28 00:56:14.757750 | orchestrator | Saturday 28 March 2026 00:51:03 +0000 (0:00:03.965) 0:02:53.875 ******** 2026-03-28 00:56:14.757761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:56:14.757779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:56:14.757790 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.757801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:56:14.757817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:56:14.757827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:56:14.757837 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.757847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-28 00:56:14.757857 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.757867 | orchestrator | 2026-03-28 00:56:14.757877 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-28 00:56:14.757890 | orchestrator | Saturday 28 March 2026 00:51:07 +0000 (0:00:04.124) 0:02:58.000 ******** 2026-03-28 00:56:14.757900 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.757910 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.757919 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.757929 | orchestrator | 2026-03-28 00:56:14.757938 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-28 00:56:14.757948 | orchestrator | Saturday 28 March 2026 00:51:09 +0000 (0:00:01.378) 0:02:59.378 ******** 2026-03-28 00:56:14.757958 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.757967 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.757976 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.757986 | orchestrator | 2026-03-28 00:56:14.757995 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-28 00:56:14.758005 
| orchestrator | Saturday 28 March 2026 00:51:11 +0000 (0:00:02.391) 0:03:01.770 ******** 2026-03-28 00:56:14.758014 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.758070 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.758082 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.758093 | orchestrator | 2026-03-28 00:56:14.758106 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-28 00:56:14.758117 | orchestrator | Saturday 28 March 2026 00:51:11 +0000 (0:00:00.420) 0:03:02.190 ******** 2026-03-28 00:56:14.758128 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.758139 | orchestrator | 2026-03-28 00:56:14.758150 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-28 00:56:14.758162 | orchestrator | Saturday 28 March 2026 00:51:13 +0000 (0:00:01.426) 0:03:03.617 ******** 2026-03-28 00:56:14.758182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.758202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.758214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.758225 | orchestrator | 2026-03-28 00:56:14.758236 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-28 00:56:14.758247 | orchestrator | Saturday 28 March 2026 00:51:16 +0000 (0:00:03.356) 0:03:06.973 ******** 2026-03-28 00:56:14.758264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.758277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.758287 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.758303 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.758319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.758329 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.758339 | orchestrator | 2026-03-28 00:56:14.758349 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-28 00:56:14.758358 | orchestrator | Saturday 28 March 2026 00:51:17 +0000 (0:00:00.398) 0:03:07.372 ******** 2026-03-28 00:56:14.758369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.758380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.758391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.758401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.758411 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.758421 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.758430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.758440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.758450 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.758460 | orchestrator | 2026-03-28 00:56:14.758469 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-28 00:56:14.758479 | orchestrator | Saturday 28 March 2026 00:51:17 +0000 (0:00:00.853) 0:03:08.225 ******** 2026-03-28 00:56:14.758489 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.758499 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.758508 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.758518 | orchestrator | 2026-03-28 00:56:14.758553 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-28 00:56:14.758563 | orchestrator | Saturday 28 March 2026 00:51:19 +0000 (0:00:01.257) 0:03:09.483 ******** 2026-03-28 00:56:14.758573 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.758582 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.758597 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.758607 | orchestrator | 2026-03-28 00:56:14.758617 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-28 00:56:14.758627 | orchestrator | Saturday 28 March 2026 00:51:21 +0000 (0:00:02.546) 0:03:12.029 ******** 2026-03-28 00:56:14.758645 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.758662 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.758677 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.758691 | orchestrator | 2026-03-28 00:56:14.758707 | orchestrator | 
TASK [include_role : horizon] ************************************************** 2026-03-28 00:56:14.758722 | orchestrator | Saturday 28 March 2026 00:51:22 +0000 (0:00:00.384) 0:03:12.413 ******** 2026-03-28 00:56:14.758737 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.758752 | orchestrator | 2026-03-28 00:56:14.758767 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-28 00:56:14.758784 | orchestrator | Saturday 28 March 2026 00:51:23 +0000 (0:00:01.309) 0:03:13.723 ******** 2026-03-28 00:56:14.758813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 00:56:14.758843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 00:56:14.758883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 00:56:14.758896 | orchestrator | 2026-03-28 00:56:14.758905 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-28 00:56:14.758915 | orchestrator | Saturday 28 March 2026 00:51:27 +0000 (0:00:04.354) 0:03:18.077 ******** 2026-03-28 00:56:14.758938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 00:56:14.758966 | orchestrator | skipping: [testbed-node-2] 
2026-03-28 00:56:14.758989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 00:56:14.759027 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.759075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 00:56:14.759095 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.759110 | orchestrator | 2026-03-28 00:56:14.759125 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-28 00:56:14.759141 | orchestrator | Saturday 28 March 2026 00:51:28 +0000 (0:00:00.809) 0:03:18.887 ******** 2026-03-28 00:56:14.759157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 00:56:14.759175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:56:14.759192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 00:56:14.759210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:56:14.759239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 00:56:14.759255 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.759266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 00:56:14.759282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:56:14.759292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 00:56:14.759302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:56:14.759312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 00:56:14.759322 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.759339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 00:56:14.759350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:56:14.759360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-28 00:56:14.759370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2026-03-28 00:56:14.759379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-28 00:56:14.759389 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.759399 | orchestrator | 2026-03-28 00:56:14.759408 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-28 00:56:14.759418 | orchestrator | Saturday 28 March 2026 00:51:31 +0000 (0:00:02.767) 0:03:21.655 ******** 2026-03-28 00:56:14.759435 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.759445 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.759455 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.759464 | orchestrator | 2026-03-28 00:56:14.759476 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-28 00:56:14.759492 | orchestrator | Saturday 28 March 2026 00:51:32 +0000 (0:00:01.433) 0:03:23.088 ******** 2026-03-28 00:56:14.759509 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.759553 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.759571 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.759589 | orchestrator | 2026-03-28 00:56:14.759607 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-28 00:56:14.759626 | orchestrator | Saturday 28 March 2026 00:51:35 +0000 (0:00:03.035) 0:03:26.123 ******** 2026-03-28 00:56:14.759645 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.759664 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.759682 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.759701 | orchestrator | 2026-03-28 00:56:14.759719 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-28 00:56:14.759737 | orchestrator | 
Saturday 28 March 2026 00:51:36 +0000 (0:00:00.542) 0:03:26.665 ******** 2026-03-28 00:56:14.759753 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.759770 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.759787 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.759805 | orchestrator | 2026-03-28 00:56:14.759823 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-28 00:56:14.759841 | orchestrator | Saturday 28 March 2026 00:51:37 +0000 (0:00:00.662) 0:03:27.328 ******** 2026-03-28 00:56:14.759866 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.759884 | orchestrator | 2026-03-28 00:56:14.759900 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-28 00:56:14.759918 | orchestrator | Saturday 28 March 2026 00:51:39 +0000 (0:00:02.239) 0:03:29.568 ******** 2026-03-28 00:56:14.759936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 00:56:14.759969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:56:14.759986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:56:14.760017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 00:56:14.760043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 00:56:14.760061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:56:14.760086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 00:56:14.760104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 00:56:14.760131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-28 00:56:14.760149 | orchestrator |
2026-03-28 00:56:14.760165 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-03-28 00:56:14.760182 | orchestrator | Saturday 28 March 2026 00:51:45 +0000 (0:00:06.268) 0:03:35.836 ********
2026-03-28 00:56:14.760205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-03-28 00:56:14.760224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-28 00:56:14.760251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-03-28 00:56:14.760281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-28 00:56:14.760298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-28 00:56:14.760314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-28 00:56:14.760331 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.760346 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.760370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-03-28 00:56:14.760388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-28 00:56:14.760414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-28 00:56:14.760440 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.760456 | orchestrator |
2026-03-28 00:56:14.760471 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-03-28 00:56:14.760487 | orchestrator | Saturday 28 March 2026 00:51:46 +0000 (0:00:00.970) 0:03:36.807 ********
2026-03-28 00:56:14.760503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-03-28 00:56:14.760587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-03-28 00:56:14.760612 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.760655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-03-28 00:56:14.760673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-03-28 00:56:14.760688 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.760704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-03-28 00:56:14.760720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-03-28 00:56:14.760735 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.760751 | orchestrator |
2026-03-28 00:56:14.760766 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-03-28 00:56:14.760792 | orchestrator | Saturday 28 March 2026 00:51:48 +0000 (0:00:01.727) 0:03:38.535 ********
2026-03-28 00:56:14.760808 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.760823 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.760839 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.760855 | orchestrator |
2026-03-28 00:56:14.760870 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-03-28 00:56:14.760886 | orchestrator | Saturday 28 March 2026 00:51:49 +0000 (0:00:01.721) 0:03:40.256 ********
2026-03-28 00:56:14.760902 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.760919 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.760935 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.760951 | orchestrator |
2026-03-28 00:56:14.760966 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-03-28 00:56:14.760982 | orchestrator | Saturday 28 March 2026 00:51:53 +0000 (0:00:03.708) 0:03:43.965 ********
2026-03-28 00:56:14.760998 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.761029 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.761046 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.761061 | orchestrator |
2026-03-28 00:56:14.761077 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-03-28 00:56:14.761092 | orchestrator | Saturday 28 March 2026 00:51:54 +0000 (0:00:00.372) 0:03:44.338 ********
2026-03-28 00:56:14.761107 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:56:14.761120 | orchestrator |
2026-03-28 00:56:14.761134 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-03-28 00:56:14.761147 | orchestrator | Saturday 28 March 2026 00:51:55 +0000 (0:00:01.750) 0:03:46.088 ********
2026-03-28 00:56:14.761179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.761195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.761240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.761288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761302 | orchestrator |
2026-03-28 00:56:14.761315 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-03-28 00:56:14.761328 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:05.814) 0:03:51.902 ********
2026-03-28 00:56:14.761337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.761350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761365 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.761374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.761388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761397 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.761405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.761414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761422 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.761431 | orchestrator |
2026-03-28 00:56:14.761439 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-03-28 00:56:14.761447 | orchestrator | Saturday 28 March 2026 00:52:02 +0000 (0:00:01.348) 0:03:53.251 ********
2026-03-28 00:56:14.761461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.761474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.761483 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.761491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.761499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.761507 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.761515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.761550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.761566 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.761581 | orchestrator |
2026-03-28 00:56:14.761594 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-03-28 00:56:14.761607 | orchestrator | Saturday 28 March 2026 00:52:05 +0000 (0:00:02.268) 0:03:55.519 ********
2026-03-28 00:56:14.761624 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.761632 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.761640 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.761648 | orchestrator |
2026-03-28 00:56:14.761655 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-03-28 00:56:14.761663 | orchestrator | Saturday 28 March 2026 00:52:06 +0000 (0:00:01.426) 0:03:56.946 ********
2026-03-28 00:56:14.761671 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.761679 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.761686 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.761694 | orchestrator |
2026-03-28 00:56:14.761702 | orchestrator | TASK [include_role : manila] ***************************************************
2026-03-28 00:56:14.761710 | orchestrator | Saturday 28 March 2026 00:52:09 +0000 (0:00:01.370) 0:03:59.684 ********
2026-03-28 00:56:14.761718 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:56:14.761725 | orchestrator |
2026-03-28 00:56:14.761733 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-03-28 00:56:14.761741 | orchestrator | Saturday 28 March 2026 00:52:10 +0000 (0:00:01.370) 0:04:01.055 ********
2026-03-28 00:56:14.761750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.761765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.761810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.761887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.761933 | orchestrator |
2026-03-28 00:56:14.761946 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-03-28 00:56:14.761969 | orchestrator | Saturday 28 March 2026 00:52:14 +0000 (0:00:03.508) 0:04:04.564 ********
2026-03-28 00:56:14.761983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.762002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.762011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.762051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.762059 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.762075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.762090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.762098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True,
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.762110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.762118 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.762126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.762140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.762148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.762161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.762170 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.762177 | orchestrator | 2026-03-28 00:56:14.762185 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-28 00:56:14.762194 | orchestrator | Saturday 28 March 2026 00:52:14 +0000 (0:00:00.670) 0:04:05.234 ******** 2026-03-28 00:56:14.762202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.762210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.762218 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.762226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.762241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.762249 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.762257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.762265 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.762274 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.762281 | orchestrator | 2026-03-28 00:56:14.762289 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-28 00:56:14.762297 | orchestrator | Saturday 28 March 2026 00:52:16 +0000 (0:00:01.220) 0:04:06.455 ******** 2026-03-28 00:56:14.762305 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.762313 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.762320 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.762328 | orchestrator | 2026-03-28 00:56:14.762336 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-28 00:56:14.762343 | orchestrator | Saturday 28 March 2026 00:52:17 +0000 (0:00:01.324) 0:04:07.780 ******** 2026-03-28 00:56:14.762351 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.762359 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.762367 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.762375 | orchestrator | 2026-03-28 00:56:14.762382 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-28 00:56:14.762395 | orchestrator | Saturday 28 March 2026 00:52:19 +0000 (0:00:02.355) 0:04:10.135 ******** 2026-03-28 00:56:14.762407 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.762416 | orchestrator | 2026-03-28 00:56:14.762423 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-28 00:56:14.762431 | orchestrator | Saturday 28 March 2026 00:52:21 +0000 (0:00:01.542) 
0:04:11.677 ******** 2026-03-28 00:56:14.762439 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 00:56:14.762447 | orchestrator | 2026-03-28 00:56:14.762455 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-28 00:56:14.762463 | orchestrator | Saturday 28 March 2026 00:52:24 +0000 (0:00:03.449) 0:04:15.127 ******** 2026-03-28 00:56:14.762472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:56:14.762485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:56:14.762494 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.762508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:56:14.762543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:56:14.762553 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.762566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:56:14.762575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:56:14.762588 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.762596 | orchestrator | 2026-03-28 00:56:14.762604 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-28 00:56:14.762612 | orchestrator | Saturday 28 March 2026 00:52:27 +0000 (0:00:02.791) 0:04:17.918 ******** 2026-03-28 00:56:14.762626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:56:14.762635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:56:14.762644 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.762656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:56:14.762675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:56:14.762684 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.762692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:56:14.762705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-28 00:56:14.762714 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.762722 | orchestrator | 2026-03-28 00:56:14.762736 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-28 00:56:14.762744 | orchestrator | Saturday 28 March 2026 00:52:29 +0000 (0:00:02.069) 0:04:19.988 ******** 2026-03-28 00:56:14.762752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:56:14.762765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:56:14.762774 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.762782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:56:14.762790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:56:14.762803 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.762811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:56:14.762820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-28 00:56:14.762828 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.762840 | orchestrator | 2026-03-28 00:56:14.762848 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-28 00:56:14.762856 | orchestrator | Saturday 28 March 2026 00:52:32 +0000 (0:00:02.402) 0:04:22.390 ******** 2026-03-28 00:56:14.762864 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.762872 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.762879 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.762887 | orchestrator | 2026-03-28 00:56:14.762895 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-28 00:56:14.762903 | orchestrator | Saturday 28 March 2026 00:52:34 +0000 (0:00:02.142) 0:04:24.533 ******** 2026-03-28 00:56:14.762910 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.762918 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.762926 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.762934 | orchestrator | 2026-03-28 00:56:14.762942 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-28 00:56:14.762949 | orchestrator | Saturday 28 March 2026 00:52:36 +0000 (0:00:02.038) 0:04:26.572 ******** 2026-03-28 00:56:14.762957 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.762965 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.762973 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.762981 | orchestrator | 2026-03-28 00:56:14.762988 | orchestrator | TASK 
[include_role : memcached] ************************************************ 2026-03-28 00:56:14.762996 | orchestrator | Saturday 28 March 2026 00:52:37 +0000 (0:00:01.060) 0:04:27.632 ******** 2026-03-28 00:56:14.763004 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.763012 | orchestrator | 2026-03-28 00:56:14.763019 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-28 00:56:14.763028 | orchestrator | Saturday 28 March 2026 00:52:38 +0000 (0:00:01.542) 0:04:29.175 ******** 2026-03-28 00:56:14.763041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:56:14.763050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:56:14.763077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-28 00:56:14.763091 | orchestrator | 2026-03-28 00:56:14.763099 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-28 00:56:14.763107 | orchestrator | Saturday 28 March 2026 00:52:41 +0000 (0:00:02.214) 0:04:31.389 ******** 2026-03-28 00:56:14.763118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}})  2026-03-28 00:56:14.763127 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.763135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:56:14.763143 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.763166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-28 00:56:14.763175 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.763183 | orchestrator | 2026-03-28 00:56:14.763190 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-28 
00:56:14.763198 | orchestrator | Saturday 28 March 2026 00:52:41 +0000 (0:00:00.450) 0:04:31.840 ******** 2026-03-28 00:56:14.763206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 00:56:14.763216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 00:56:14.763224 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.763232 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.763240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-28 00:56:14.763254 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.763261 | orchestrator | 2026-03-28 00:56:14.763269 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-28 00:56:14.763277 | orchestrator | Saturday 28 March 2026 00:52:42 +0000 (0:00:00.671) 0:04:32.511 ******** 2026-03-28 00:56:14.763285 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.763292 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.763300 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.763308 | orchestrator | 2026-03-28 00:56:14.763316 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-28 00:56:14.763324 | orchestrator | 
Saturday 28 March 2026 00:52:42 +0000 (0:00:00.432) 0:04:32.944 ******** 2026-03-28 00:56:14.763332 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.763340 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.763348 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.763355 | orchestrator | 2026-03-28 00:56:14.763363 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-28 00:56:14.763371 | orchestrator | Saturday 28 March 2026 00:52:44 +0000 (0:00:01.693) 0:04:34.638 ******** 2026-03-28 00:56:14.763379 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.763386 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.763394 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.763402 | orchestrator | 2026-03-28 00:56:14.763416 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-28 00:56:14.763424 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:01.042) 0:04:35.680 ******** 2026-03-28 00:56:14.763432 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.763440 | orchestrator | 2026-03-28 00:56:14.763447 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-28 00:56:14.763455 | orchestrator | Saturday 28 March 2026 00:52:47 +0000 (0:00:01.600) 0:04:37.281 ******** 2026-03-28 00:56:14.763463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.763479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 00:56:14.763506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 00:56:14.763515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.763605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.763614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 00:56:14.763629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': 
True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.763637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 00:56:14.763659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.763667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:56:14.763694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.763703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.763715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 
'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 00:56:14.763741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.763750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 00:56:14.763763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 00:56:14.763798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.763806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 00:56:14.763815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.763836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.763848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 00:56:14.763864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.763872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.763881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 00:56:14.763892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 00:56:14.763909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.763928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.763937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.763953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 00:56:14.763966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:56:14.763975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.763996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.764005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:56:14.764025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.764033 | orchestrator | 2026-03-28 00:56:14.764042 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-28 00:56:14.764049 | orchestrator | Saturday 28 March 2026 00:52:56 +0000 (0:00:09.055) 0:04:46.336 ******** 2026-03-28 00:56:14.764058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.764075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 
'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 00:56:14.764092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 00:56:14.764105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.764123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.764134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 00:56:14.764141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.764148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 00:56:14.764165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.764180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.764187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:56:14.764202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.764224 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.764235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': 
{'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 00:56:14.764242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 00:56:14.764249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.764266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.764274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 00:56:14.764291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.764299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.764306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 00:56:14.764336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-28 00:56:14.764347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.764355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-28 00:56:14.764362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:56:14.764391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.764403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.764410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.764417 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.764424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-28 00:56:14.764436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.764448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-28 00:56:14.764636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-28 00:56:14.764652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.764659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-28 00:56:14.764667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-28 00:56:14.764686 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.764694 | orchestrator | 2026-03-28 00:56:14.764700 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-28 00:56:14.764707 | orchestrator | Saturday 28 March 2026 00:52:57 +0000 (0:00:01.910) 0:04:48.247 ******** 2026-03-28 00:56:14.764714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.764727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.764734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.764741 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.764748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.764755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.764762 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.764773 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.764780 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.764787 | orchestrator | 2026-03-28 00:56:14.764794 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-28 00:56:14.764800 | orchestrator | Saturday 28 March 2026 00:52:59 +0000 (0:00:01.592) 0:04:49.839 ******** 2026-03-28 00:56:14.764807 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.764814 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.764820 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.764827 | orchestrator | 2026-03-28 00:56:14.764834 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-28 00:56:14.764840 | orchestrator | Saturday 28 March 2026 00:53:01 +0000 (0:00:01.568) 0:04:51.407 ******** 2026-03-28 00:56:14.764847 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.764854 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.764860 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.764867 | orchestrator | 2026-03-28 00:56:14.764874 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-28 00:56:14.764881 | orchestrator | Saturday 28 March 2026 00:53:03 +0000 (0:00:02.103) 0:04:53.511 ******** 2026-03-28 00:56:14.764887 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.764894 | orchestrator | 2026-03-28 00:56:14.764901 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-28 00:56:14.764913 | orchestrator | Saturday 28 March 2026 00:53:04 +0000 (0:00:01.568) 0:04:55.079 ******** 
2026-03-28 00:56:14.764920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 00:56:14.764931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 00:56:14.764943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 00:56:14.764951 | orchestrator | 2026-03-28 00:56:14.764958 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-28 00:56:14.764965 | orchestrator | Saturday 28 March 2026 00:53:09 +0000 (0:00:04.376) 0:04:59.456 ******** 2026-03-28 00:56:14.764972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 00:56:14.764985 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.764992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 00:56:14.764999 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.765009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 00:56:14.765017 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.765024 | orchestrator | 2026-03-28 00:56:14.765031 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-28 00:56:14.765038 | orchestrator | Saturday 28 March 2026 00:53:10 +0000 (0:00:01.454) 0:05:00.910 ******** 2026-03-28 00:56:14.765045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.765057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.765064 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.765071 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.765078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.765090 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.765097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.765105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.765112 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.765119 | orchestrator | 2026-03-28 00:56:14.765126 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-28 00:56:14.765132 | orchestrator | Saturday 28 March 2026 00:53:11 +0000 (0:00:00.891) 0:05:01.801 ******** 2026-03-28 00:56:14.765139 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.765146 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.765153 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.765159 | orchestrator | 2026-03-28 00:56:14.765166 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-28 00:56:14.765173 | 
orchestrator | Saturday 28 March 2026 00:53:12 +0000 (0:00:01.273) 0:05:03.075 ******** 2026-03-28 00:56:14.765179 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.765186 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.765193 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.765200 | orchestrator | 2026-03-28 00:56:14.765206 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-28 00:56:14.765213 | orchestrator | Saturday 28 March 2026 00:53:15 +0000 (0:00:02.228) 0:05:05.304 ******** 2026-03-28 00:56:14.765220 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.765227 | orchestrator | 2026-03-28 00:56:14.765235 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-28 00:56:14.765243 | orchestrator | Saturday 28 March 2026 00:53:16 +0000 (0:00:01.657) 0:05:06.961 ******** 2026-03-28 00:56:14.765256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.765270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.765285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.765294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.765306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-03-28 00:56:14.765315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.765326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.765343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.765352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.765365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.765373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.765385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.765397 | orchestrator |
2026-03-28 00:56:14.765406 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-03-28 00:56:14.765414 | orchestrator | Saturday 28 March 2026 00:53:22 +0000 (0:00:06.197) 0:05:13.158 ********
2026-03-28 00:56:14.765422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.765431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.765445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler',
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.765454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.765467 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.765479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.765488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.765499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.765508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.765531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 00:56:14.765544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.765552 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.765561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.765569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 00:56:14.765577 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.765585 | orchestrator |
2026-03-28 00:56:14.765592 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-03-28 00:56:14.765599 | orchestrator | Saturday 28 March 2026 00:53:24 +0000 (0:00:01.707) 0:05:14.866 ********
2026-03-28 00:56:14.765606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765643 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.765649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765681 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.765688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765708 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-28 00:56:14.765715 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.765721 | orchestrator |
2026-03-28 00:56:14.765728 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-03-28 00:56:14.765735 | orchestrator | Saturday 28 March 2026 00:53:26 +0000 (0:00:01.752) 0:05:16.618 ********
2026-03-28 00:56:14.765742 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.765749 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.765755 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.765762 | orchestrator |
2026-03-28 00:56:14.765769 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-03-28 00:56:14.765775 | orchestrator | Saturday 28 March 2026 00:53:27 +0000 (0:00:01.349) 0:05:17.968 ********
2026-03-28 00:56:14.765782 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.765789 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.765796 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.765802 | orchestrator |
2026-03-28 00:56:14.765809 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-03-28 00:56:14.765816 | orchestrator | Saturday 28 March 2026 00:53:30 +0000 (0:00:02.415) 0:05:20.383 ********
2026-03-28 00:56:14.765823 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:56:14.765835 | orchestrator |
2026-03-28 00:56:14.765841 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-03-28 00:56:14.765848 | orchestrator | Saturday 28 March 2026 00:53:31 +0000 (0:00:01.632) 0:05:22.016 ********
2026-03-28 00:56:14.765854 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-03-28 00:56:14.765861 | orchestrator |
2026-03-28 00:56:14.765868 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-03-28 00:56:14.765878 | orchestrator | Saturday 28 March 2026 00:53:33 +0000 (0:00:01.300) 0:05:23.317 ********
2026-03-28 00:56:14.765885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.765892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.765903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.765910 | orchestrator |
2026-03-28 00:56:14.765917 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-03-28 00:56:14.765924 | orchestrator | Saturday 28 March 2026 00:53:39 +0000 (0:00:06.142) 0:05:29.459 ********
2026-03-28 00:56:14.765930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.765937 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.765944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.765951 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.765958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.765969 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.765976 | orchestrator |
2026-03-28 00:56:14.765982 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-03-28 00:56:14.765989 | orchestrator | Saturday 28 March 2026 00:53:41 +0000 (0:00:02.098) 0:05:31.557 ********
2026-03-28 00:56:14.765996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:56:14.766003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:56:14.766013 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.766043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:56:14.766051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:56:14.766058 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.766065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http',
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:56:14.766072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-28 00:56:14.766079 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.766085 | orchestrator |
2026-03-28 00:56:14.766092 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-28 00:56:14.766099 | orchestrator | Saturday 28 March 2026 00:53:44 +0000 (0:00:02.884) 0:05:34.442 ********
2026-03-28 00:56:14.766105 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.766112 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.766118 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.766125 | orchestrator |
2026-03-28 00:56:14.766132 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-28 00:56:14.766151 | orchestrator | Saturday 28 March 2026 00:53:47 +0000 (0:00:03.401) 0:05:37.844 ********
2026-03-28 00:56:14.766157 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.766164 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.766171 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.766177 | orchestrator |
2026-03-28 00:56:14.766184 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-03-28 00:56:14.766191 | orchestrator | Saturday 28 March 2026 00:53:51 +0000 (0:00:04.282) 0:05:42.127 ********
2026-03-28 00:56:14.766197 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-03-28 00:56:14.766204 | orchestrator |
2026-03-28 00:56:14.766211 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-28 00:56:14.766217 | orchestrator | Saturday 28 March 2026 00:53:53 +0000 (0:00:01.471) 0:05:43.599 ********
2026-03-28 00:56:14.766229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.766236 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.766243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.766250 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.766257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.766264 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.766270 | orchestrator |
2026-03-28 00:56:14.766277 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-28 00:56:14.766284 | orchestrator | Saturday 28 March 2026 00:53:55 +0000 (0:00:02.476) 0:05:46.076 ********
2026-03-28 00:56:14.766294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.766301 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.766308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.766314 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.766332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-28 00:56:14.766339 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.766345 | orchestrator |
2026-03-28 00:56:14.766356 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-28 00:56:14.766363 | orchestrator | Saturday 28 March 2026 00:53:58 +0000 (0:00:02.302) 0:05:48.378 ********
2026-03-28 00:56:14.766370 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.766376 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.766383 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.766389 | orchestrator |
2026-03-28 00:56:14.766396 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-28 00:56:14.766402 | orchestrator | Saturday 28 March 2026 00:54:00 +0000 (0:00:01.966) 0:05:50.345 ********
2026-03-28 00:56:14.766409 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.766416 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.766422 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.766429 | orchestrator |
2026-03-28 00:56:14.766435 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-28 00:56:14.766442 | orchestrator | Saturday 28 March 2026 00:54:02 +0000 (0:00:02.893) 0:05:53.239 ********
2026-03-28 00:56:14.766448 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.766455 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.766461 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.766468 | orchestrator |
2026-03-28 00:56:14.766474 | orchestrator | TASK [nova-cell : Configure loadbalancer for
nova-serialproxy] *****************
2026-03-28 00:56:14.766481 | orchestrator | Saturday 28 March 2026 00:54:06 +0000 (0:00:03.652) 0:05:56.891 ********
2026-03-28 00:56:14.766488 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-03-28 00:56:14.766495 | orchestrator |
2026-03-28 00:56:14.766501 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-03-28 00:56:14.766508 | orchestrator | Saturday 28 March 2026 00:54:08 +0000 (0:00:01.814) 0:05:58.705 ********
2026-03-28 00:56:14.766515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-28 00:56:14.766564 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.766572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-28 00:56:14.766579 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.766586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-28 00:56:14.766593 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.766599 | orchestrator |
2026-03-28 00:56:14.766606 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-28 00:56:14.766613 | orchestrator | Saturday 28 March 2026 00:54:10 +0000 (0:00:01.833) 0:06:00.538 ********
2026-03-28 00:56:14.766625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-28 00:56:14.766632 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.766643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083',
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 00:56:14.766650 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.766657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-28 00:56:14.766664 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.766670 | orchestrator | 2026-03-28 00:56:14.766677 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-28 00:56:14.766684 | orchestrator | Saturday 28 March 2026 00:54:11 +0000 (0:00:01.367) 0:06:01.906 ******** 2026-03-28 00:56:14.766690 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.766696 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.766702 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.766708 | orchestrator | 2026-03-28 00:56:14.766715 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-28 00:56:14.766721 | orchestrator | Saturday 28 March 2026 00:54:13 +0000 (0:00:02.156) 0:06:04.063 ******** 2026-03-28 00:56:14.766727 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:56:14.766733 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:56:14.766739 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:56:14.766745 | orchestrator | 2026-03-28 00:56:14.766751 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-28 00:56:14.766772 | orchestrator | Saturday 28 March 2026 
00:54:16 +0000 (0:00:02.445) 0:06:06.508 ******** 2026-03-28 00:56:14.766779 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:56:14.766785 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:56:14.766791 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:56:14.766797 | orchestrator | 2026-03-28 00:56:14.766803 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-28 00:56:14.766809 | orchestrator | Saturday 28 March 2026 00:54:20 +0000 (0:00:03.899) 0:06:10.408 ******** 2026-03-28 00:56:14.766815 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.766821 | orchestrator | 2026-03-28 00:56:14.766827 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-28 00:56:14.766834 | orchestrator | Saturday 28 March 2026 00:54:21 +0000 (0:00:01.499) 0:06:11.907 ******** 2026-03-28 00:56:14.766846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 00:56:14.766858 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:56:14.766869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.766876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.766883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.766889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 00:56:14.766903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 
00:56:14.766909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.766919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.766925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.766932 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 00:56:14.766938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:56:14.766951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.766958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.766964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.766971 | orchestrator | 2026-03-28 00:56:14.766977 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-28 00:56:14.766986 | orchestrator | Saturday 28 March 2026 00:54:25 +0000 (0:00:03.629) 0:06:15.537 ******** 2026-03-28 00:56:14.766993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 00:56:14.767000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:56:14.767006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-03-28 00:56:14.767019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.767026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.767032 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.767043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 00:56:14.767049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:56:14.767056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.767062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.767076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.767082 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.767089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 00:56:14.767099 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 00:56:14.767105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.767111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 00:56:14.767118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 
'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 00:56:14.767128 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.767134 | orchestrator | 2026-03-28 00:56:14.767141 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-28 00:56:14.767147 | orchestrator | Saturday 28 March 2026 00:54:26 +0000 (0:00:00.832) 0:06:16.370 ******** 2026-03-28 00:56:14.767153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:56:14.767163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:56:14.767169 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.767175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:56:14.767182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:56:14.767188 | orchestrator | skipping: [testbed-node-1] 
2026-03-28 00:56:14.767194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:56:14.767203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-28 00:56:14.767212 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.767222 | orchestrator | 2026-03-28 00:56:14.767230 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-28 00:56:14.767236 | orchestrator | Saturday 28 March 2026 00:54:27 +0000 (0:00:00.964) 0:06:17.335 ******** 2026-03-28 00:56:14.767243 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.767249 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.767255 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.767261 | orchestrator | 2026-03-28 00:56:14.767267 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-28 00:56:14.767273 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:01.711) 0:06:19.046 ******** 2026-03-28 00:56:14.767279 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.767289 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.767295 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.767301 | orchestrator | 2026-03-28 00:56:14.767307 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-28 00:56:14.767313 | orchestrator | Saturday 28 March 2026 00:54:30 +0000 (0:00:02.174) 0:06:21.221 ******** 2026-03-28 00:56:14.767320 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.767326 
| orchestrator | 2026-03-28 00:56:14.767332 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-28 00:56:14.767338 | orchestrator | Saturday 28 March 2026 00:54:32 +0000 (0:00:01.438) 0:06:22.659 ******** 2026-03-28 00:56:14.767348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.767356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.767365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.767376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:56:14.767385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:56:14.767396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:56:14.767403 | orchestrator | 2026-03-28 00:56:14.767414 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-28 00:56:14.767420 | orchestrator | Saturday 28 March 2026 00:54:38 +0000 (0:00:06.227) 0:06:28.887 ******** 2026-03-28 00:56:14.767427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.767438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:56:14.767448 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.767455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option 
httpchk']}}}})  2026-03-28 00:56:14.767465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:56:14.767472 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.767478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.767490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:56:14.767500 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.767507 | orchestrator | 2026-03-28 00:56:14.767513 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-28 00:56:14.767538 | orchestrator | Saturday 28 March 2026 00:54:39 +0000 (0:00:01.104) 0:06:29.991 ******** 2026-03-28 00:56:14.767544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.767551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 00:56:14.767557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 00:56:14.767564 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.767570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.767577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 00:56:14.767586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 00:56:14.767593 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.767599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.767605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 00:56:14.767611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-28 00:56:14.767618 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.767628 | orchestrator | 2026-03-28 00:56:14.767635 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-28 00:56:14.767641 | orchestrator | Saturday 28 March 2026 00:54:40 +0000 (0:00:01.165) 0:06:31.157 ******** 2026-03-28 00:56:14.767647 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.767653 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.767659 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.767665 | orchestrator | 2026-03-28 00:56:14.767671 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-28 00:56:14.767678 | orchestrator | Saturday 28 March 2026 00:54:41 +0000 (0:00:00.535) 0:06:31.692 ******** 2026-03-28 00:56:14.767684 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.767694 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.767700 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.767706 | orchestrator | 2026-03-28 00:56:14.767712 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2026-03-28 00:56:14.767718 | orchestrator | Saturday 28 March 2026 00:54:42 +0000 (0:00:01.515) 0:06:33.207 ******** 2026-03-28 00:56:14.767724 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.767730 | orchestrator | 2026-03-28 00:56:14.767736 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-28 00:56:14.767742 | orchestrator | Saturday 28 March 2026 00:54:44 +0000 (0:00:01.811) 0:06:35.019 ******** 2026-03-28 00:56:14.767749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-28 00:56:14.767756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:56:14.767766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.767773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.767784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.767795 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-28 00:56:14.767802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:56:14.767809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.767815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.767824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.767831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready 
HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-28 00:56:14.767845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:56:14.767852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.767858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.767865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.767874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.767885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 00:56:14.767896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.767903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.767909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.767916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.767926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:56:14.767940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 00:56:14.767947 | orchestrator | 2026-03-28 00:56:14 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED 2026-03-28 00:56:14.768002 | orchestrator | 2026-03-28 00:56:14 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:56:14.768010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 
'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 00:56:14.768016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 
'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.768059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.768066 | orchestrator | 2026-03-28 
00:56:14.768075 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-28 00:56:14.768082 | orchestrator | Saturday 28 March 2026 00:54:49 +0000 (0:00:04.705) 0:06:39.724 ******** 2026-03-28 00:56:14.768088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-28 00:56:14.768096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:56:14.768107 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.768133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.768140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 00:56:14.768147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-28 00:56:14.768168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:56:14.768184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.768191 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.768197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.768224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.768231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 00:56:14.768241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.768263 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.768270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-28 00:56:14.768280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 00:56:14.768286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.768309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:56:14.768320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-28 00:56:14.768330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 00:56:14.768342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 00:56:14.768349 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.768355 | orchestrator | 2026-03-28 00:56:14.768361 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-28 00:56:14.768368 | orchestrator | Saturday 28 March 2026 00:54:51 +0000 (0:00:01.800) 0:06:41.525 ******** 2026-03-28 00:56:14.768377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 00:56:14.768384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 00:56:14.768391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.768402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.768408 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.768414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 00:56:14.768421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 00:56:14.768430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.768437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.768443 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.768449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 00:56:14.768456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-28 
00:56:14.768462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.768472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-28 00:56:14.768478 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.768484 | orchestrator | 2026-03-28 00:56:14.768490 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-28 00:56:14.768501 | orchestrator | Saturday 28 March 2026 00:54:52 +0000 (0:00:01.177) 0:06:42.703 ******** 2026-03-28 00:56:14.768507 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.768513 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.768535 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.768542 | orchestrator | 2026-03-28 00:56:14.768549 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-28 00:56:14.768555 | orchestrator | Saturday 28 March 2026 00:54:52 +0000 (0:00:00.456) 0:06:43.159 ******** 2026-03-28 00:56:14.768561 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.768567 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.768573 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.768579 | orchestrator | 2026-03-28 00:56:14.768585 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-28 
00:56:14.768592 | orchestrator | Saturday 28 March 2026 00:54:54 +0000 (0:00:01.445) 0:06:44.604 ******** 2026-03-28 00:56:14.768598 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.768604 | orchestrator | 2026-03-28 00:56:14.768610 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-28 00:56:14.768616 | orchestrator | Saturday 28 March 2026 00:54:56 +0000 (0:00:01.846) 0:06:46.451 ******** 2026-03-28 00:56:14.768623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:56:14.768633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:56:14.768643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-28 00:56:14.768656 | orchestrator | 2026-03-28 00:56:14.768663 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-28 00:56:14.768669 | orchestrator | Saturday 28 March 2026 00:54:59 +0000 (0:00:02.928) 0:06:49.380 ******** 2026-03-28 00:56:14.768675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 
'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:56:14.768682 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.768688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:56:14.768695 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
00:56:14.768704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-28 00:56:14.768711 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.768718 | orchestrator | 2026-03-28 00:56:14.768724 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-28 00:56:14.768730 | orchestrator | Saturday 28 March 2026 00:54:59 +0000 (0:00:00.490) 0:06:49.870 ******** 2026-03-28 00:56:14.768741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 00:56:14.768747 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.768753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 00:56:14.768760 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.768766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-28 00:56:14.768772 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.768778 | orchestrator | 2026-03-28 00:56:14.768787 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-28 00:56:14.768794 | orchestrator | Saturday 28 March 2026 00:55:00 +0000 (0:00:01.138) 0:06:51.008 ******** 2026-03-28 00:56:14.768800 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.768806 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.768812 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.768818 | orchestrator | 2026-03-28 00:56:14.768824 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-28 00:56:14.768830 | orchestrator | Saturday 28 March 2026 00:55:01 +0000 (0:00:00.534) 0:06:51.543 ******** 2026-03-28 00:56:14.768836 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.768842 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.768849 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.768855 | orchestrator | 2026-03-28 00:56:14.768861 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-28 00:56:14.768867 | orchestrator | Saturday 28 March 2026 00:55:03 +0000 (0:00:01.753) 0:06:53.297 ******** 2026-03-28 00:56:14.768873 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.768879 | orchestrator | 2026-03-28 00:56:14.768885 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-28 00:56:14.768891 | orchestrator | Saturday 28 March 2026 00:55:04 +0000 (0:00:01.940) 0:06:55.238 ******** 2026-03-28 00:56:14.768898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-28 00:56:14.768908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk 
GET /docs']}}}}) 2026-03-28 00:56:14.768920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-28 00:56:14.768931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 00:56:14.768939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 00:56:14.768949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 00:56:14.768960 | orchestrator | 2026-03-28 00:56:14.768966 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-28 00:56:14.768973 | orchestrator | Saturday 28 March 2026 00:55:11 +0000 (0:00:06.948) 0:07:02.187 ******** 2026-03-28 00:56:14.768982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-28 00:56:14.768989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 00:56:14.768996 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.769003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-28 00:56:14.769013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 
'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 00:56:14.769023 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.769033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/docs']}}}})  2026-03-28 00:56:14.769041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 00:56:14.769047 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.769053 | orchestrator | 2026-03-28 00:56:14.769059 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-28 00:56:14.769066 | orchestrator | Saturday 28 March 2026 00:55:12 +0000 (0:00:00.754) 0:07:02.942 ******** 2026-03-28 00:56:14.769072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 00:56:14.769079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 00:56:14.769085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.769099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.769106 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.769112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 00:56:14.769119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 00:56:14.769126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.769132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 
00:56:14.769138 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.769145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 00:56:14.769154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-28 00:56:14.769161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.769167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-28 00:56:14.769173 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.769179 | orchestrator | 2026-03-28 00:56:14.769186 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-28 00:56:14.769192 | orchestrator | Saturday 28 March 2026 00:55:13 +0000 (0:00:01.129) 0:07:04.072 ******** 2026-03-28 00:56:14.769198 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.769204 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.769211 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.769217 | orchestrator | 2026-03-28 00:56:14.769223 | orchestrator | TASK [proxysql-config : Copying over skyline 
ProxySQL rules config] ************ 2026-03-28 00:56:14.769229 | orchestrator | Saturday 28 March 2026 00:55:15 +0000 (0:00:01.603) 0:07:05.675 ******** 2026-03-28 00:56:14.769235 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:56:14.769241 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:56:14.769247 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:56:14.769253 | orchestrator | 2026-03-28 00:56:14.769260 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-28 00:56:14.769270 | orchestrator | Saturday 28 March 2026 00:55:17 +0000 (0:00:02.210) 0:07:07.886 ******** 2026-03-28 00:56:14.769276 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.769282 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.769288 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.769295 | orchestrator | 2026-03-28 00:56:14.769301 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-28 00:56:14.769307 | orchestrator | Saturday 28 March 2026 00:55:17 +0000 (0:00:00.356) 0:07:08.243 ******** 2026-03-28 00:56:14.769313 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.769319 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.769325 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.769331 | orchestrator | 2026-03-28 00:56:14.769338 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-28 00:56:14.769344 | orchestrator | Saturday 28 March 2026 00:55:18 +0000 (0:00:00.354) 0:07:08.598 ******** 2026-03-28 00:56:14.769350 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.769356 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.769362 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.769368 | orchestrator | 2026-03-28 00:56:14.769374 | orchestrator | TASK [include_role : watcher] 
************************************************** 2026-03-28 00:56:14.769380 | orchestrator | Saturday 28 March 2026 00:55:18 +0000 (0:00:00.333) 0:07:08.931 ******** 2026-03-28 00:56:14.769386 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.769393 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.769402 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.769408 | orchestrator | 2026-03-28 00:56:14.769415 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-28 00:56:14.769421 | orchestrator | Saturday 28 March 2026 00:55:19 +0000 (0:00:00.715) 0:07:09.646 ******** 2026-03-28 00:56:14.769427 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:56:14.769433 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:56:14.769439 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:56:14.769446 | orchestrator | 2026-03-28 00:56:14.769452 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-03-28 00:56:14.769458 | orchestrator | Saturday 28 March 2026 00:55:19 +0000 (0:00:00.377) 0:07:10.023 ******** 2026-03-28 00:56:14.769464 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:56:14.769470 | orchestrator | 2026-03-28 00:56:14.769476 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-28 00:56:14.769482 | orchestrator | Saturday 28 March 2026 00:55:21 +0000 (0:00:02.153) 0:07:12.176 ******** 2026-03-28 00:56:14.769489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.769499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.769510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-28 00:56:14.769517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:56:14.769538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:56:14.769547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:56:14.769554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:56:14.769560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:56:14.769570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:56:14.769582 | orchestrator |
2026-03-28 00:56:14.769588 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-03-28 00:56:14.769595 | orchestrator | Saturday 28 March 2026 00:55:24 +0000 (0:00:02.310) 0:07:14.487 ********
2026-03-28 00:56:14.769601 | orchestrator | changed: [testbed-node-0] => {
2026-03-28 00:56:14.769607 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 00:56:14.769613 | orchestrator | }
2026-03-28 00:56:14.769620 | orchestrator | changed: [testbed-node-1] => {
2026-03-28 00:56:14.769626 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 00:56:14.769632 | orchestrator | }
2026-03-28 00:56:14.769638 | orchestrator | changed: [testbed-node-2] => {
2026-03-28 00:56:14.769644 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 00:56:14.769650 | orchestrator | }
2026-03-28 00:56:14.769656 | orchestrator |
2026-03-28 00:56:14.769662 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-28 00:56:14.769668 | orchestrator | Saturday 28 March 2026 00:55:24 +0000 (0:00:00.414) 0:07:14.902 ********
2026-03-28 00:56:14.769675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-28 00:56:14.769681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:56:14.769691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-28 00:56:14.769698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:56:14.769704 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.769710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:56:14.769745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:56:14.769753 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.769759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-28 00:56:14.769766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-28 00:56:14.769772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-28 00:56:14.769779 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.769785 | orchestrator |
2026-03-28 00:56:14.769791 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-03-28 00:56:14.769798 | orchestrator | Saturday 28 March 2026 00:55:26 +0000 (0:00:01.772) 0:07:16.674 ********
2026-03-28 00:56:14.769807 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.769814 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.769820 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.769826 | orchestrator |
2026-03-28 00:56:14.769832 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-03-28 00:56:14.769838 | orchestrator | Saturday 28 March 2026 00:55:27 +0000 (0:00:00.677) 0:07:17.352 ********
2026-03-28 00:56:14.769845 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.769851 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.769857 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.769863 | orchestrator |
2026-03-28 00:56:14.769869 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-28 00:56:14.769875 | orchestrator | Saturday 28 March 2026 00:55:27 +0000 (0:00:00.390) 0:07:17.743 ********
2026-03-28 00:56:14.769881 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.769895 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.769901 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.769907 | orchestrator |
2026-03-28 00:56:14.769914 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-28 00:56:14.769920 | orchestrator | Saturday 28 March 2026 00:55:28 +0000 (0:00:01.292) 0:07:19.036 ********
2026-03-28 00:56:14.769926 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.769932 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.769938 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.769944 | orchestrator |
2026-03-28 00:56:14.769951 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-28 00:56:14.769957 | orchestrator | Saturday 28 March 2026 00:55:29 +0000 (0:00:00.905) 0:07:19.942 ********
2026-03-28 00:56:14.769963 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.769969 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.769975 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.769981 | orchestrator |
2026-03-28 00:56:14.769987 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-28 00:56:14.769993 | orchestrator | Saturday 28 March 2026 00:55:30 +0000 (0:00:00.826) 0:07:20.768 ********
2026-03-28 00:56:14.769999 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.770005 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.770011 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.770039 | orchestrator |
2026-03-28 00:56:14.770046 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-28 00:56:14.770052 | orchestrator | Saturday 28 March 2026 00:55:40 +0000 (0:00:09.925) 0:07:30.694 ********
2026-03-28 00:56:14.770058 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.770064 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.770070 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.770076 | orchestrator |
2026-03-28 00:56:14.770083 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-28 00:56:14.770094 | orchestrator | Saturday 28 March 2026 00:55:41 +0000 (0:00:01.272) 0:07:31.966 ********
2026-03-28 00:56:14.770101 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.770107 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.770113 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.770119 | orchestrator |
2026-03-28 00:56:14.770125 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-28 00:56:14.770131 | orchestrator | Saturday 28 March 2026 00:55:51 +0000 (0:00:10.263) 0:07:42.229 ********
2026-03-28 00:56:14.770137 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.770143 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.770149 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.770155 | orchestrator |
2026-03-28 00:56:14.770161 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-28 00:56:14.770167 | orchestrator | Saturday 28 March 2026 00:55:56 +0000 (0:00:04.682) 0:07:46.912 ********
2026-03-28 00:56:14.770174 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:56:14.770180 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:56:14.770186 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:56:14.770192 | orchestrator |
2026-03-28 00:56:14.770198 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-28 00:56:14.770204 | orchestrator | Saturday 28 March 2026 00:56:06 +0000 (0:00:09.785) 0:07:56.697 ********
2026-03-28 00:56:14.770210 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.770216 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.770223 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.770229 | orchestrator |
2026-03-28 00:56:14.770235 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-28 00:56:14.770241 | orchestrator | Saturday 28 March 2026 00:56:07 +0000 (0:00:00.716) 0:07:57.413 ********
2026-03-28 00:56:14.770248 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.770254 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.770260 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.770271 | orchestrator |
2026-03-28 00:56:14.770277 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-28 00:56:14.770283 | orchestrator | Saturday 28 March 2026 00:56:07 +0000 (0:00:00.376) 0:07:57.789 ********
2026-03-28 00:56:14.770289 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.770295 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.770301 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.770308 | orchestrator |
2026-03-28 00:56:14.770314 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-28 00:56:14.770320 | orchestrator | Saturday 28 March 2026 00:56:07 +0000 (0:00:00.377) 0:07:58.167 ********
2026-03-28 00:56:14.770326 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.770332 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.770338 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.770344 | orchestrator |
2026-03-28 00:56:14.770350 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-28 00:56:14.770356 | orchestrator | Saturday 28 March 2026 00:56:08 +0000 (0:00:00.450) 0:07:58.617 ********
2026-03-28 00:56:14.770362 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.770368 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.770374 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.770380 | orchestrator |
2026-03-28 00:56:14.770386 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-28 00:56:14.770392 | orchestrator | Saturday 28 March 2026 00:56:09 +0000 (0:00:00.825) 0:07:59.443 ********
2026-03-28 00:56:14.770398 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:56:14.770404 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:56:14.770414 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:56:14.770420 | orchestrator |
2026-03-28 00:56:14.770426 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-28 00:56:14.770432 | orchestrator | Saturday 28 March 2026 00:56:09 +0000 (0:00:00.474) 0:07:59.918 ********
2026-03-28 00:56:14.770439 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.770445 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.770451 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.770457 | orchestrator |
2026-03-28 00:56:14.770463 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-28 00:56:14.770469 | orchestrator | Saturday 28 March 2026 00:56:10 +0000 (0:00:00.955) 0:08:00.873 ********
2026-03-28 00:56:14.770475 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:56:14.770481 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:56:14.770487 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:56:14.770493 | orchestrator |
2026-03-28 00:56:14.770500 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:56:14.770506 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-28 00:56:14.770512 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-28 00:56:14.770531 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-28 00:56:14.770538 | orchestrator |
2026-03-28 00:56:14.770545 | orchestrator |
2026-03-28 00:56:14.770551 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:56:14.770557 | orchestrator | Saturday 28 March 2026 00:56:11 +0000 (0:00:00.884) 0:08:01.758 ********
2026-03-28 00:56:14.770563 | orchestrator | ===============================================================================
2026-03-28 00:56:14.770569 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.26s
2026-03-28 00:56:14.770575 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.93s
2026-03-28 00:56:14.770581 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.79s
2026-03-28 00:56:14.770591 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 9.06s
2026-03-28 00:56:14.770598 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 8.82s
2026-03-28 00:56:14.770608 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 7.09s
2026-03-28 00:56:14.770614 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.95s
2026-03-28 00:56:14.770620 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.76s
2026-03-28 00:56:14.770626 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 6.27s
2026-03-28 00:56:14.770632 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.23s
2026-03-28 00:56:14.770639 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.20s
2026-03-28 00:56:14.770645 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.14s
2026-03-28 00:56:14.770651 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.84s
2026-03-28 00:56:14.770657 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.81s
2026-03-28 00:56:14.770663 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.57s
2026-03-28 00:56:14.770669 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 5.54s
2026-03-28 00:56:14.770675 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.50s
2026-03-28 00:56:14.770681 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.71s
2026-03-28 00:56:14.770687 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.68s
2026-03-28 00:56:14.770693 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.68s
2026-03-28 00:56:17.803762 | orchestrator | 2026-03-28 00:56:17 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:17.804324 | orchestrator | 2026-03-28 00:56:17 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:17.805920 | orchestrator | 2026-03-28 00:56:17 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:17.806091 | orchestrator | 2026-03-28 00:56:17 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:20.849360 | orchestrator | 2026-03-28 00:56:20 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:20.849996 | orchestrator | 2026-03-28 00:56:20 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:20.850580 | orchestrator | 2026-03-28 00:56:20 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:20.850610 | orchestrator | 2026-03-28 00:56:20 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:23.903052 | orchestrator | 2026-03-28 00:56:23 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:23.903152 | orchestrator | 2026-03-28 00:56:23 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in 
state STARTED
2026-03-28 00:56:23.903189 | orchestrator | 2026-03-28 00:56:23 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:23.903201 | orchestrator | 2026-03-28 00:56:23 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:26.939732 | orchestrator | 2026-03-28 00:56:26 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:26.943824 | orchestrator | 2026-03-28 00:56:26 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:26.946327 | orchestrator | 2026-03-28 00:56:26 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:26.946713 | orchestrator | 2026-03-28 00:56:26 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:29.995652 | orchestrator | 2026-03-28 00:56:29 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:29.995907 | orchestrator | 2026-03-28 00:56:29 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:29.997567 | orchestrator | 2026-03-28 00:56:29 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:29.997916 | orchestrator | 2026-03-28 00:56:29 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:33.035084 | orchestrator | 2026-03-28 00:56:33 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:33.035596 | orchestrator | 2026-03-28 00:56:33 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:33.036798 | orchestrator | 2026-03-28 00:56:33 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:33.036828 | orchestrator | 2026-03-28 00:56:33 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:36.087040 | orchestrator | 2026-03-28 00:56:36 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:36.087912 | orchestrator | 2026-03-28 00:56:36 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:36.089685 | orchestrator | 2026-03-28 00:56:36 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:36.090080 | orchestrator | 2026-03-28 00:56:36 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:39.143849 | orchestrator | 2026-03-28 00:56:39 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:39.143931 | orchestrator | 2026-03-28 00:56:39 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:39.144030 | orchestrator | 2026-03-28 00:56:39 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:39.144042 | orchestrator | 2026-03-28 00:56:39 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:42.178265 | orchestrator | 2026-03-28 00:56:42 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:42.178654 | orchestrator | 2026-03-28 00:56:42 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:42.179465 | orchestrator | 2026-03-28 00:56:42 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:42.179570 | orchestrator | 2026-03-28 00:56:42 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:45.216698 | orchestrator | 2026-03-28 00:56:45 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:45.218129 | orchestrator | 2026-03-28 00:56:45 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:45.221061 | orchestrator | 2026-03-28 00:56:45 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:45.221395 | orchestrator | 2026-03-28 00:56:45 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:48.252595 | orchestrator | 2026-03-28 00:56:48 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:48.253213 | orchestrator | 2026-03-28 00:56:48 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:48.256216 | orchestrator | 2026-03-28 00:56:48 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:48.256264 | orchestrator | 2026-03-28 00:56:48 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:51.295992 | orchestrator | 2026-03-28 00:56:51 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:51.296951 | orchestrator | 2026-03-28 00:56:51 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:51.297849 | orchestrator | 2026-03-28 00:56:51 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:51.297939 | orchestrator | 2026-03-28 00:56:51 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:54.325144 | orchestrator | 2026-03-28 00:56:54 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:54.325578 | orchestrator | 2026-03-28 00:56:54 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:54.326338 | orchestrator | 2026-03-28 00:56:54 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:54.326357 | orchestrator | 2026-03-28 00:56:54 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:56:57.362139 | orchestrator | 2026-03-28 00:56:57 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:56:57.366655 | orchestrator | 2026-03-28 00:56:57 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:56:57.367923 | orchestrator | 2026-03-28 00:56:57 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:56:57.368137 | orchestrator | 2026-03-28 00:56:57 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:57:00.416884 | orchestrator | 2026-03-28 00:57:00 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:57:00.418393 | orchestrator | 2026-03-28 00:57:00 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:57:00.421096 | orchestrator | 2026-03-28 00:57:00 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state STARTED
2026-03-28 00:57:00.421385 | orchestrator | 2026-03-28 00:57:00 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:57:03.512352 | orchestrator | 2026-03-28 00:57:03 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:57:03.512973 | orchestrator | 2026-03-28 00:57:03 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:57:03.517845 | orchestrator | 2026-03-28 00:57:03 | INFO  | Task 14509cd2-e77d-4c13-885e-b01b4a4cee04 is in state SUCCESS
2026-03-28 00:57:03.520084 | orchestrator |
2026-03-28 00:57:03.520132 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 00:57:03.520145 | orchestrator | 2.16.14
2026-03-28 00:57:03.520156 | orchestrator |
2026-03-28 00:57:03.520166 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-28 00:57:03.520177 | orchestrator |
2026-03-28 00:57:03.520187 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 00:57:03.520198 | orchestrator | Saturday 28 March 2026 00:45:10 +0000 (0:00:00.847) 0:00:00.847 ********
2026-03-28 00:57:03.520209 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.520220 | orchestrator |
2026-03-28 00:57:03.520230 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-28 00:57:03.520240 | 
orchestrator | Saturday 28 March 2026 00:45:12 +0000 (0:00:01.603) 0:00:02.450 ********
2026-03-28 00:57:03.520250 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.520386 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.520470 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.520481 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.520491 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.520501 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.520591 | orchestrator |
2026-03-28 00:57:03.520610 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-28 00:57:03.520627 | orchestrator | Saturday 28 March 2026 00:45:15 +0000 (0:00:02.546) 0:00:04.997 ********
2026-03-28 00:57:03.520641 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.520655 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.520671 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.520688 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.520704 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.520721 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.520733 | orchestrator |
2026-03-28 00:57:03.520744 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 00:57:03.520756 | orchestrator | Saturday 28 March 2026 00:45:16 +0000 (0:00:00.923) 0:00:05.920 ********
2026-03-28 00:57:03.520767 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.520778 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.520789 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.520800 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.520810 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.520821 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.520831 | orchestrator |
2026-03-28 00:57:03.520844 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 00:57:03.520854 | orchestrator | Saturday 28 March 2026 00:45:17 +0000 (0:00:01.053) 0:00:06.974 ********
2026-03-28 00:57:03.520865 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.520876 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.520887 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.520899 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.520908 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.520953 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.520965 | orchestrator |
2026-03-28 00:57:03.520974 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-28 00:57:03.520984 | orchestrator | Saturday 28 March 2026 00:45:18 +0000 (0:00:01.266) 0:00:08.240 ********
2026-03-28 00:57:03.520993 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.521017 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.521027 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.521036 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.521045 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.521055 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.521064 | orchestrator |
2026-03-28 00:57:03.521159 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 00:57:03.521170 | orchestrator | Saturday 28 March 2026 00:45:19 +0000 (0:00:01.473) 0:00:09.714 ********
2026-03-28 00:57:03.521179 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.521189 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.521198 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.521207 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.521217 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.521226 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.521235 | orchestrator |
2026-03-28 00:57:03.521245 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 00:57:03.521255 | orchestrator | Saturday 28 March 2026 00:45:22 +0000 (0:00:02.218) 0:00:11.932 ********
2026-03-28 00:57:03.521265 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.521275 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.521285 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.521294 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.521417 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.521428 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.521437 | orchestrator |
2026-03-28 00:57:03.521469 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 00:57:03.521480 | orchestrator | Saturday 28 March 2026 00:45:23 +0000 (0:00:01.274) 0:00:13.206 ********
2026-03-28 00:57:03.521489 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.521499 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.521519 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.521529 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.521538 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.521547 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.521557 | orchestrator |
2026-03-28 00:57:03.521566 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 00:57:03.521575 | orchestrator | Saturday 28 March 2026 00:45:24 +0000 (0:00:01.288) 0:00:14.495 ********
2026-03-28 00:57:03.521585 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 00:57:03.521594 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 00:57:03.521604 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 00:57:03.521613 | orchestrator |
2026-03-28 00:57:03.521622 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 00:57:03.521632 | orchestrator | Saturday 28 March 2026 00:45:25 +0000 (0:00:00.829) 0:00:15.325 ********
2026-03-28 00:57:03.521642 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.521651 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.521661 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.521686 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.521696 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.521706 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.521715 | orchestrator |
2026-03-28 00:57:03.521724 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 00:57:03.521734 | orchestrator | Saturday 28 March 2026 00:45:27 +0000 (0:00:02.018) 0:00:17.343 ********
2026-03-28 00:57:03.521744 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 00:57:03.521753 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 00:57:03.521763 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 00:57:03.521772 | orchestrator |
2026-03-28 00:57:03.521781 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 00:57:03.521791 | orchestrator | Saturday 28 March 2026 00:45:31 +0000 (0:00:03.695) 0:00:21.038 ********
2026-03-28 00:57:03.521800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 00:57:03.521810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 00:57:03.521819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 00:57:03.521828 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.521838 | orchestrator |
2026-03-28 00:57:03.521847 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 00:57:03.521857 | orchestrator | Saturday 28 March 2026 00:45:32 +0000 (0:00:01.030) 0:00:22.069 ********
2026-03-28 00:57:03.521905 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 00:57:03.521920 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 00:57:03.521930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 00:57:03.521940 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.521949 | orchestrator |
2026-03-28 00:57:03.521959 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-28 00:57:03.521969 | orchestrator | Saturday 28 March 2026 00:45:34 +0000 (0:00:02.135) 0:00:24.204 ********
2026-03-28 00:57:03.521994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-28 00:57:03.522007 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.522066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.522077 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.522088 | orchestrator | 2026-03-28 00:57:03.522098 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 00:57:03.522107 | orchestrator | Saturday 28 March 2026 00:45:34 +0000 (0:00:00.380) 0:00:24.585 ******** 2026-03-28 00:57:03.522128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 00:45:28.573618', 'end': '2026-03-28 00:45:28.687006', 'delta': '0:00:00.113388', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.522145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 00:45:29.514797', 'end': '2026-03-28 00:45:29.632724', 'delta': '0:00:00.117927', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.522155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 00:45:30.829427', 'end': '2026-03-28 00:45:30.951232', 'delta': '0:00:00.121805', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.522165 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.522175 | orchestrator | 2026-03-28 00:57:03.522184 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 00:57:03.522202 | orchestrator | Saturday 28 March 2026 00:45:35 +0000 (0:00:00.907) 0:00:25.493 ******** 2026-03-28 00:57:03.522211 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.522221 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.522353 | orchestrator | ok: [testbed-node-5] 
2026-03-28 00:57:03.522363 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.522373 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.522382 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.522392 | orchestrator | 2026-03-28 00:57:03.522401 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 00:57:03.522416 | orchestrator | Saturday 28 March 2026 00:45:38 +0000 (0:00:03.194) 0:00:28.687 ******** 2026-03-28 00:57:03.522426 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:57:03.522436 | orchestrator | 2026-03-28 00:57:03.522493 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 00:57:03.522504 | orchestrator | Saturday 28 March 2026 00:45:39 +0000 (0:00:00.887) 0:00:29.574 ******** 2026-03-28 00:57:03.522514 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.522524 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.522533 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.522543 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.522553 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.522562 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.522571 | orchestrator | 2026-03-28 00:57:03.522581 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 00:57:03.522590 | orchestrator | Saturday 28 March 2026 00:45:42 +0000 (0:00:03.252) 0:00:32.827 ******** 2026-03-28 00:57:03.522600 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.522609 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.522619 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.522628 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.522637 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.522646 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 00:57:03.522656 | orchestrator | 2026-03-28 00:57:03.522665 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 00:57:03.522675 | orchestrator | Saturday 28 March 2026 00:45:44 +0000 (0:00:01.648) 0:00:34.476 ******** 2026-03-28 00:57:03.522684 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.522694 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.522703 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.522712 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.522722 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.522731 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.522740 | orchestrator | 2026-03-28 00:57:03.522750 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 00:57:03.522759 | orchestrator | Saturday 28 March 2026 00:45:46 +0000 (0:00:02.025) 0:00:36.502 ******** 2026-03-28 00:57:03.522769 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.522779 | orchestrator | 2026-03-28 00:57:03.522788 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 00:57:03.522798 | orchestrator | Saturday 28 March 2026 00:45:46 +0000 (0:00:00.172) 0:00:36.674 ******** 2026-03-28 00:57:03.522807 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.522817 | orchestrator | 2026-03-28 00:57:03.522826 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 00:57:03.522836 | orchestrator | Saturday 28 March 2026 00:45:47 +0000 (0:00:00.256) 0:00:36.931 ******** 2026-03-28 00:57:03.522845 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.522900 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.522910 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.522938 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 00:57:03.522948 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.522958 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.522975 | orchestrator | 2026-03-28 00:57:03.522984 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 00:57:03.522994 | orchestrator | Saturday 28 March 2026 00:45:48 +0000 (0:00:01.806) 0:00:38.738 ******** 2026-03-28 00:57:03.523003 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.523013 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.523133 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.523145 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.523155 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.523164 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.523174 | orchestrator | 2026-03-28 00:57:03.523184 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 00:57:03.523193 | orchestrator | Saturday 28 March 2026 00:45:51 +0000 (0:00:02.280) 0:00:41.018 ******** 2026-03-28 00:57:03.523203 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.523212 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.523222 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.523231 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.523241 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.523250 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.523260 | orchestrator | 2026-03-28 00:57:03.523269 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 00:57:03.523279 | orchestrator | Saturday 28 March 2026 00:45:52 +0000 (0:00:01.424) 0:00:42.442 ******** 2026-03-28 00:57:03.523289 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.523298 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 00:57:03.523308 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.523317 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.523327 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.523336 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.523346 | orchestrator | 2026-03-28 00:57:03.523355 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 00:57:03.523365 | orchestrator | Saturday 28 March 2026 00:45:53 +0000 (0:00:01.102) 0:00:43.545 ******** 2026-03-28 00:57:03.523374 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.523384 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.523393 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.523403 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.523412 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.523422 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.523431 | orchestrator | 2026-03-28 00:57:03.523441 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 00:57:03.523500 | orchestrator | Saturday 28 March 2026 00:45:54 +0000 (0:00:00.899) 0:00:44.445 ******** 2026-03-28 00:57:03.523510 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.523520 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.523530 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.523539 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.523549 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.523558 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.523568 | orchestrator | 2026-03-28 00:57:03.523578 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 00:57:03.523594 | orchestrator | Saturday 28 March 2026 00:45:56 +0000 (0:00:01.833) 
0:00:46.278 ******** 2026-03-28 00:57:03.523604 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.523614 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.523624 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.523633 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.523642 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.523652 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.523661 | orchestrator | 2026-03-28 00:57:03.523671 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 00:57:03.523688 | orchestrator | Saturday 28 March 2026 00:45:57 +0000 (0:00:01.223) 0:00:47.502 ******** 2026-03-28 00:57:03.523700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3eb28a65--49e9--527a--93b6--39f945444b2a-osd--block--3eb28a65--49e9--527a--93b6--39f945444b2a', 'dm-uuid-LVM-Tchacmkbltv1g8Xx5nMCBdnIbnCImJsIPRMECP12a16eAHm6yNrvtAvfv1MxnEUQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c246942--827f--54a7--8a08--735105fd2fd0-osd--block--8c246942--827f--54a7--8a08--735105fd2fd0', 'dm-uuid-LVM-HWw01x01ciwNdkzf1FFw2E1N5qxqftc4GlclHBtP5diew3a2C5nmBsr7tBLGXVnd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.523852 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3eb28a65--49e9--527a--93b6--39f945444b2a-osd--block--3eb28a65--49e9--527a--93b6--39f945444b2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jn6V62-taHj-7NNl-DW6r-rQuJ-XtFr-BtDt29', 'scsi-0QEMU_QEMU_HARDDISK_78ac07d6-a998-431a-8632-f54c89645a8d', 'scsi-SQEMU_QEMU_HARDDISK_78ac07d6-a998-431a-8632-f54c89645a8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.523869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8c246942--827f--54a7--8a08--735105fd2fd0-osd--block--8c246942--827f--54a7--8a08--735105fd2fd0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GIfRsv-INbZ-xxrK-fLUV-EInY-JKfg-cHLsY4', 'scsi-0QEMU_QEMU_HARDDISK_af575ecf-0cf6-48aa-a1b6-43f16240ccad', 'scsi-SQEMU_QEMU_HARDDISK_af575ecf-0cf6-48aa-a1b6-43f16240ccad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.523888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2c41d1e-c1aa-422a-bc56-ab0bbd118726', 'scsi-SQEMU_QEMU_HARDDISK_d2c41d1e-c1aa-422a-bc56-ab0bbd118726'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.523899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.523917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95774a3e--10f2--5c5c--866d--eaa2f6123896-osd--block--95774a3e--10f2--5c5c--866d--eaa2f6123896', 'dm-uuid-LVM-on9bNmqQdl6bqf2swm2eFjEqLh4NH46Ev4my3a3dstUeUyyjSITM8iDZj3AEZbI7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6126976c--050b--5515--8c81--fb3ee245975b-osd--block--6126976c--050b--5515--8c81--fb3ee245975b', 
'dm-uuid-LVM-XwRfxGsnuoG51EkZS9WI1B6veK02hkwXdHKcvQh9ZJAmvIlWj4yHrj2qiTTQd77U'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.523995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524024 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.524053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--95774a3e--10f2--5c5c--866d--eaa2f6123896-osd--block--95774a3e--10f2--5c5c--866d--eaa2f6123896'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XX3svd-zCjt-Tult-1O5W-sL6T-2xD5-SPEEy7', 'scsi-0QEMU_QEMU_HARDDISK_0a0aea56-4050-4691-823a-d862fa48a59f', 'scsi-SQEMU_QEMU_HARDDISK_0a0aea56-4050-4691-823a-d862fa48a59f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6126976c--050b--5515--8c81--fb3ee245975b-osd--block--6126976c--050b--5515--8c81--fb3ee245975b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oy3YlD-QptT-9TfB-PYTV-Y3aA-qi23-moCusu', 'scsi-0QEMU_QEMU_HARDDISK_c165f4e4-c145-4cd5-8a4b-fe75c460abfb', 'scsi-SQEMU_QEMU_HARDDISK_c165f4e4-c145-4cd5-8a4b-fe75c460abfb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edfefcfb-f0d2-43d0-b5b0-353b223cd811', 'scsi-SQEMU_QEMU_HARDDISK_edfefcfb-f0d2-43d0-b5b0-353b223cd811'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a9825c53--ea63--5cae--a5f7--e494f125bb8e-osd--block--a9825c53--ea63--5cae--a5f7--e494f125bb8e', 'dm-uuid-LVM-B4pMeiTrBM8rvX1vahFbOPL3qjpt1Q32fUdZkecXTUtglIbr9PLn8TGSmGxI4RpJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d-osd--block--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d', 'dm-uuid-LVM-fa5e9cMh8YJv5YMVwd7Z0lDYFGaAUWE21iI9X68E0kjP8CuUyiEfHNG6pf8mWjS1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2026-03-28 00:57:03.524188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524284 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.524294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part1', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part14', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part15', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part16', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a9825c53--ea63--5cae--a5f7--e494f125bb8e-osd--block--a9825c53--ea63--5cae--a5f7--e494f125bb8e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fQOx62-yoeg-BbRB-W0wg-1u6h-7as6-VoKrFG', 'scsi-0QEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869', 'scsi-SQEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d-osd--block--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-92lr24-Adml-wnIe-TNqU-A4d1-LbSX-xdGC5x', 'scsi-0QEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815', 'scsi-SQEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d', 'scsi-SQEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82', 'scsi-SQEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part1', 'scsi-SQEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part14', 'scsi-SQEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part15', 'scsi-SQEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part16', 'scsi-SQEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524506 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524553 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e', 'scsi-SQEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:57:03.524651 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.524661 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.524671 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.524681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:57:03.524756 | orchestrator | skipping: [testbed-node-2] => 
(item=loop6)
2026-03-28 00:57:03.524766 | orchestrator | skipping: [testbed-node-2] => (item=loop7)
2026-03-28 00:57:03.524781 | orchestrator | skipping: [testbed-node-2] => (item=sda: QEMU HARDDISK, 80.00 GB, partitions sda1/sda14/sda15/sda16)
2026-03-28 00:57:03.524799 | orchestrator | skipping: [testbed-node-2] => (item=sr0: QEMU DVD-ROM, config-2)
2026-03-28 00:57:03.524816 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.524825 | orchestrator |
2026-03-28 00:57:03.524835 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-28 00:57:03.524845 | orchestrator | Saturday 28 March 2026 00:46:00 +0000 (0:00:03.242)       0:00:50.744 ********
2026-03-28 00:57:03.524855 | orchestrator | skipping: [testbed-node-3] => (item=dm-0; false_condition: osd_auto_discovery | default(False) | bool)
[per-item device facts (holders, links, partitions, sizes) elided below; every item on every node was skipped with the same false_condition]
2026-03-28 00:57:03.524866 | orchestrator | skipping: [testbed-node-3] => (item=dm-1)
2026-03-28 00:57:03.524881 | orchestrator | skipping: [testbed-node-3] => (item=loop0)
2026-03-28 00:57:03.524892 | orchestrator | skipping: [testbed-node-3] => (item=loop1)
2026-03-28 00:57:03.524902 | orchestrator | skipping: [testbed-node-3] => (item=loop2)
2026-03-28 00:57:03.524917 | orchestrator | skipping: [testbed-node-3] => (item=loop3)
2026-03-28 00:57:03.524937 | orchestrator | skipping: [testbed-node-3] => (item=loop4)
2026-03-28 00:57:03.524947 | orchestrator | skipping: [testbed-node-3] => (item=loop5)
2026-03-28 00:57:03.524961 | orchestrator | skipping: [testbed-node-3] => (item=loop6)
2026-03-28 00:57:03.524971 | orchestrator | skipping: [testbed-node-3] => (item=loop7)
2026-03-28 00:57:03.524990 | orchestrator | skipping: [testbed-node-3] => (item=sda: QEMU HARDDISK, 80.00 GB, partitions sda1/sda14/sda15/sda16)
2026-03-28 00:57:03.525008 | orchestrator | skipping: [testbed-node-3] => (item=sdb: QEMU HARDDISK, 20.00 GB, ceph LVM PV, master dm-0)
2026-03-28 00:57:03.525024 | orchestrator | skipping: [testbed-node-3] => (item=sdc: QEMU HARDDISK, 20.00 GB, ceph LVM PV, master dm-1)
2026-03-28 00:57:03.525034 | orchestrator | skipping: [testbed-node-4] => (item=dm-0)
2026-03-28 00:57:03.525052 | orchestrator | skipping: [testbed-node-3] => (item=sdd: QEMU HARDDISK, 20.00 GB)
2026-03-28 00:57:03.525068 | orchestrator | skipping: [testbed-node-3] => (item=sr0: QEMU DVD-ROM, config-2)
2026-03-28 00:57:03.525078 | orchestrator | skipping: [testbed-node-4] => (item=dm-1)
2026-03-28 00:57:03.525092 | orchestrator | skipping: [testbed-node-4] => (item=loop0)
2026-03-28 00:57:03.525103 | orchestrator | skipping: [testbed-node-4] => (item=loop1)
2026-03-28 00:57:03.525112 | orchestrator | skipping: [testbed-node-5] => (item=dm-0)
2026-03-28 00:57:03.525135 | orchestrator | skipping: [testbed-node-4] => (item=loop2)
2026-03-28 00:57:03 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:57:03.525156 | orchestrator | skipping: [testbed-node-5] => (item=dm-1)
2026-03-28 00:57:03.525166 | orchestrator | skipping: [testbed-node-4] => (item=loop3)
2026-03-28 00:57:03.525180 | orchestrator | skipping: [testbed-node-5] => (item=loop0)
2026-03-28 00:57:03.525190 | orchestrator | skipping: [testbed-node-4] => (item=loop4)
2026-03-28 00:57:03.525200 | orchestrator | skipping: [testbed-node-5] => (item=loop1)
2026-03-28 00:57:03.525222 | orchestrator | skipping: [testbed-node-4] => (item=loop5)
2026-03-28 00:57:03.525233 | orchestrator | skipping: [testbed-node-4] => (item=loop6)
2026-03-28 00:57:03.525242 | orchestrator | skipping: [testbed-node-5] => (item=loop2)
2026-03-28 00:57:03.525256 | orchestrator | skipping: [testbed-node-4] => (item=loop7)
2026-03-28 00:57:03.525274 | orchestrator | skipping: [testbed-node-4] => (item=sda: QEMU HARDDISK, 80.00 GB, partitions sda1/sda14/sda15/sda16)
2026-03-28 00:57:03.525294 | orchestrator | skipping: [testbed-node-5] => (item=loop3)
2026-03-28 00:57:03.525305 | orchestrator | skipping: [testbed-node-4] => (item=sdb: QEMU HARDDISK, 20.00 GB, ceph LVM PV, master dm-0)
2026-03-28 00:57:03.525320 | orchestrator | skipping: [testbed-node-4] => (item=sdc: QEMU HARDDISK, 20.00 GB, ceph LVM PV, master dm-1)
2026-03-28 00:57:03.525330 | orchestrator | skipping: [testbed-node-5] => (item=loop4)
2026-03-28 00:57:03.525351 | orchestrator | skipping: [testbed-node-4] => (item=sdd: QEMU HARDDISK, 20.00 GB)
2026-03-28 00:57:03.525361 | orchestrator | skipping: [testbed-node-4] => (item=sr0: QEMU DVD-ROM, config-2)
2026-03-28 00:57:03.525372 | orchestrator | skipping: [testbed-node-5] => (item=loop5)
2026-03-28 00:57:03.525586 | orchestrator | skipping: [testbed-node-5] => (item=loop6)
2026-03-28 00:57:03.525608 | orchestrator | skipping: [testbed-node-5] => (item=loop7)
2026-03-28 00:57:03.525619 | orchestrator | skipping: [testbed-node-5] => (item=sda: QEMU HARDDISK, 80.00 GB,
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525639 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.525657 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a9825c53--ea63--5cae--a5f7--e494f125bb8e-osd--block--a9825c53--ea63--5cae--a5f7--e494f125bb8e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fQOx62-yoeg-BbRB-W0wg-1u6h-7as6-VoKrFG', 'scsi-0QEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869', 'scsi-SQEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525674 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d-osd--block--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-92lr24-Adml-wnIe-TNqU-A4d1-LbSX-xdGC5x', 'scsi-0QEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815', 'scsi-SQEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525692 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d', 'scsi-SQEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525703 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525713 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525725 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525745 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525754 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525768 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525776 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525785 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525793 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525813 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82', 'scsi-SQEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part1', 'scsi-SQEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part14', 'scsi-SQEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part15', 'scsi-SQEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part16', 'scsi-SQEMU_QEMU_HARDDISK_88a2f0c9-b73b-426a-b81f-312e09d7fc82-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 00:57:03.525828 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525837 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.525846 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525854 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525868 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525881 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525895 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525903 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525912 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525920 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525939 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e', 'scsi-SQEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7e5dcb9-1092-43de-8534-38467587340e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525954 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.525963 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525972 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.525980 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.525988 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.525997 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.526010 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.526060 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.526070 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.526078 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.526086 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-28 00:57:03.526094 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:57:03.526113 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ea0833-972a-4340-8844-482a94e1775f', 'scsi-SQEMU_QEMU_HARDDISK_11ea0833-972a-4340-8844-482a94e1775f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ea0833-972a-4340-8844-482a94e1775f-part1', 'scsi-SQEMU_QEMU_HARDDISK_11ea0833-972a-4340-8844-482a94e1775f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ea0833-972a-4340-8844-482a94e1775f-part14', 'scsi-SQEMU_QEMU_HARDDISK_11ea0833-972a-4340-8844-482a94e1775f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ea0833-972a-4340-8844-482a94e1775f-part15', 'scsi-SQEMU_QEMU_HARDDISK_11ea0833-972a-4340-8844-482a94e1775f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11ea0833-972a-4340-8844-482a94e1775f-part16', 'scsi-SQEMU_QEMU_HARDDISK_11ea0833-972a-4340-8844-482a94e1775f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 00:57:03.526128 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:57:03.526136 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.526144 | orchestrator |
2026-03-28 00:57:03.526153 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-28 00:57:03.526161 | orchestrator | Saturday 28 March 2026 00:46:05 +0000 (0:00:04.170) 0:00:54.914 ********
2026-03-28 00:57:03.526171 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.526180 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.526190 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.526199 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.526208 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.526217 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.526226 | orchestrator |
2026-03-28 00:57:03.526235 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-28 00:57:03.526245 | orchestrator | Saturday 28 March 2026 00:46:06 +0000 (0:00:01.564) 0:00:56.479 ********
2026-03-28 00:57:03.526254 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.526263 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.526272 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.526281 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.526290 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.526299 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.526308 | orchestrator |
2026-03-28 00:57:03.526317 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 00:57:03.526327 | orchestrator | Saturday 28 March 2026 00:46:07 +0000 (0:00:01.047) 0:00:57.527 ********
2026-03-28 00:57:03.526336 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.526345 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.526353 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.526363 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.526377 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.526386 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.526394 | orchestrator |
2026-03-28 00:57:03.526404 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 00:57:03.526413 | orchestrator | Saturday 28 March 2026 00:46:09 +0000 (0:00:02.200) 0:00:59.727 ********
2026-03-28 00:57:03.526422 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.526431 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.526440 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.526463 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.526472 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.526482 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.526490 | orchestrator |
2026-03-28 00:57:03.526500 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 00:57:03.526514 | orchestrator | Saturday 28 March 2026 00:46:11 +0000 (0:00:01.273) 0:01:01.001 ********
2026-03-28 00:57:03.526523 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.526532 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.526540 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.526548 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.526555 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.526563 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.526571 | orchestrator |
2026-03-28 00:57:03.526579 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 00:57:03.526587 | orchestrator | Saturday 28 March 2026 00:46:12 +0000 (0:00:01.072) 0:01:02.073 ********
2026-03-28 00:57:03.526594 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.526606 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.526614 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.526622 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.526630 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.526638 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.526645 | orchestrator |
2026-03-28 00:57:03.526653 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-28 00:57:03.526661 | orchestrator | Saturday 28 March 2026 00:46:13 +0000 (0:00:00.833) 0:01:02.907 ********
2026-03-28 00:57:03.526670 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 00:57:03.526677 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 00:57:03.526685 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 00:57:03.526693 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 00:57:03.526701 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:57:03.526708 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 00:57:03.526716 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 00:57:03.526724 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-28 00:57:03.526731 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-28 00:57:03.526739 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-28 00:57:03.526747 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 00:57:03.526754 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-28 00:57:03.526762 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-28 00:57:03.526770 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 00:57:03.526778 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-28 00:57:03.526785 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-28 00:57:03.526793 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-28 00:57:03.526801 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-28 00:57:03.526808 | orchestrator | 2026-03-28 00:57:03.526816 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-28 00:57:03.526824 | orchestrator | Saturday 28 March 2026 00:46:17 +0000 (0:00:04.508) 0:01:07.416 ******** 2026-03-28 00:57:03.526838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-28 00:57:03.526846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-28 00:57:03.526854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-28 00:57:03.526862 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.526869 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-28 00:57:03.526877 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-28 00:57:03.526885 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-28 00:57:03.526892 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 00:57:03.526901 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-28 00:57:03.526908 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-28 00:57:03.526916 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-28 00:57:03.526924 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.526932 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 00:57:03.526940 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 00:57:03.526947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 00:57:03.526955 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.526962 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-28 00:57:03.526970 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-28 00:57:03.526978 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-28 00:57:03.526986 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.526994 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-28 00:57:03.527001 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-28 00:57:03.527009 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-28 00:57:03.527017 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.527024 | orchestrator | 2026-03-28 00:57:03.527032 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-28 00:57:03.527040 | orchestrator | Saturday 28 March 2026 00:46:18 +0000 (0:00:00.776) 0:01:08.192 ******** 2026-03-28 00:57:03.527048 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.527056 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.527063 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.527072 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.527080 | orchestrator | 2026-03-28 00:57:03.527088 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 00:57:03.527096 | orchestrator | Saturday 28 March 2026 00:46:19 +0000 (0:00:01.448) 0:01:09.641 ******** 2026-03-28 00:57:03.527104 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.527112 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.527123 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.527132 | orchestrator | 2026-03-28 00:57:03.527140 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 00:57:03.527148 | orchestrator | Saturday 28 March 2026 00:46:20 +0000 (0:00:00.532) 0:01:10.173 ******** 2026-03-28 00:57:03.527155 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.527163 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.527171 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.527179 | orchestrator | 2026-03-28 00:57:03.527187 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 00:57:03.527194 | orchestrator | Saturday 28 March 2026 00:46:20 +0000 (0:00:00.382) 0:01:10.556 ******** 2026-03-28 00:57:03.527207 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.527215 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.527228 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.527235 | orchestrator | 2026-03-28 00:57:03.527243 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 00:57:03.527251 | orchestrator | Saturday 28 March 2026 00:46:21 +0000 (0:00:00.400) 0:01:10.956 ******** 2026-03-28 00:57:03.527259 | orchestrator | 
ok: [testbed-node-3] 2026-03-28 00:57:03.527267 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.527275 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.527282 | orchestrator | 2026-03-28 00:57:03.527290 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 00:57:03.527298 | orchestrator | Saturday 28 March 2026 00:46:21 +0000 (0:00:00.874) 0:01:11.831 ******** 2026-03-28 00:57:03.527306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:57:03.527314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:57:03.527321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:57:03.527329 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.527337 | orchestrator | 2026-03-28 00:57:03.527345 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 00:57:03.527353 | orchestrator | Saturday 28 March 2026 00:46:22 +0000 (0:00:00.390) 0:01:12.221 ******** 2026-03-28 00:57:03.527361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:57:03.527368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:57:03.527376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:57:03.527384 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.527392 | orchestrator | 2026-03-28 00:57:03.527400 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 00:57:03.527407 | orchestrator | Saturday 28 March 2026 00:46:22 +0000 (0:00:00.493) 0:01:12.715 ******** 2026-03-28 00:57:03.527415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:57:03.527423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:57:03.527431 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-28 00:57:03.527439 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.527460 | orchestrator | 2026-03-28 00:57:03.527468 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 00:57:03.527476 | orchestrator | Saturday 28 March 2026 00:46:23 +0000 (0:00:00.576) 0:01:13.291 ******** 2026-03-28 00:57:03.527484 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.527492 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.527500 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.527507 | orchestrator | 2026-03-28 00:57:03.527515 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 00:57:03.527523 | orchestrator | Saturday 28 March 2026 00:46:23 +0000 (0:00:00.445) 0:01:13.737 ******** 2026-03-28 00:57:03.527530 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 00:57:03.527538 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 00:57:03.527546 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 00:57:03.527554 | orchestrator | 2026-03-28 00:57:03.527561 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-28 00:57:03.527569 | orchestrator | Saturday 28 March 2026 00:46:25 +0000 (0:00:01.830) 0:01:15.567 ******** 2026-03-28 00:57:03.527577 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 00:57:03.527585 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 00:57:03.527592 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 00:57:03.527600 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 00:57:03.527608 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 00:57:03.527616 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 00:57:03.527670 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 00:57:03.527678 | orchestrator | 2026-03-28 00:57:03.527686 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-28 00:57:03.527694 | orchestrator | Saturday 28 March 2026 00:46:27 +0000 (0:00:02.238) 0:01:17.805 ******** 2026-03-28 00:57:03.527702 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-28 00:57:03.527709 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 00:57:03.527717 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 00:57:03.527725 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-28 00:57:03.527733 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-28 00:57:03.527741 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-28 00:57:03.527748 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-28 00:57:03.527756 | orchestrator | 2026-03-28 00:57:03.527769 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 00:57:03.527778 | orchestrator | Saturday 28 March 2026 00:46:30 +0000 (0:00:02.092) 0:01:19.898 ******** 2026-03-28 00:57:03.527786 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.527794 | orchestrator | 2026-03-28 00:57:03.527802 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-03-28 00:57:03.527813 | orchestrator | Saturday 28 March 2026 00:46:31 +0000 (0:00:01.201) 0:01:21.100 ******** 2026-03-28 00:57:03.527821 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.527830 | orchestrator | 2026-03-28 00:57:03.527837 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 00:57:03.527845 | orchestrator | Saturday 28 March 2026 00:46:32 +0000 (0:00:01.423) 0:01:22.524 ******** 2026-03-28 00:57:03.527853 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.527861 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.527869 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.527876 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.527884 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.527892 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.527900 | orchestrator | 2026-03-28 00:57:03.527907 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 00:57:03.527915 | orchestrator | Saturday 28 March 2026 00:46:34 +0000 (0:00:01.754) 0:01:24.278 ******** 2026-03-28 00:57:03.527923 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.527931 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.527938 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.527946 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.527953 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.527961 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.527969 | orchestrator | 2026-03-28 00:57:03.527977 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 00:57:03.527985 | orchestrator | Saturday 28 March 2026 00:46:35 +0000 
(0:00:00.905) 0:01:25.184 ******** 2026-03-28 00:57:03.527993 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.528000 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.528008 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.528016 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.528024 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.528032 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.528039 | orchestrator | 2026-03-28 00:57:03.528047 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 00:57:03.528060 | orchestrator | Saturday 28 March 2026 00:46:36 +0000 (0:00:00.787) 0:01:25.972 ******** 2026-03-28 00:57:03.528068 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.528076 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.528083 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.528091 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.528099 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.528106 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.528114 | orchestrator | 2026-03-28 00:57:03.528122 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 00:57:03.528130 | orchestrator | Saturday 28 March 2026 00:46:37 +0000 (0:00:01.260) 0:01:27.232 ******** 2026-03-28 00:57:03.528137 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.528145 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.528153 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.528161 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.528169 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.528176 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.528184 | orchestrator | 2026-03-28 00:57:03.528192 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-03-28 00:57:03.528200 | orchestrator | Saturday 28 March 2026 00:46:38 +0000 (0:00:01.302) 0:01:28.535 ******** 2026-03-28 00:57:03.528207 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.528215 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.528223 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.528231 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.528239 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.528246 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.528254 | orchestrator | 2026-03-28 00:57:03.528262 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 00:57:03.528270 | orchestrator | Saturday 28 March 2026 00:46:39 +0000 (0:00:01.284) 0:01:29.819 ******** 2026-03-28 00:57:03.528278 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.528286 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.528294 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.528301 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.528309 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.528317 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.528324 | orchestrator | 2026-03-28 00:57:03.528332 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 00:57:03.528340 | orchestrator | Saturday 28 March 2026 00:46:41 +0000 (0:00:01.443) 0:01:31.263 ******** 2026-03-28 00:57:03.528348 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.528356 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.528364 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.528372 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.528379 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.528387 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.528394 | orchestrator | 2026-03-28 
00:57:03.528402 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 00:57:03.528410 | orchestrator | Saturday 28 March 2026 00:46:44 +0000 (0:00:03.034) 0:01:34.297 ******** 2026-03-28 00:57:03.528418 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.528426 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.528433 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.528441 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.528488 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.528496 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.528504 | orchestrator | 2026-03-28 00:57:03.528517 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 00:57:03.528525 | orchestrator | Saturday 28 March 2026 00:46:46 +0000 (0:00:02.339) 0:01:36.636 ******** 2026-03-28 00:57:03.528533 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.528541 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.528554 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.528562 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.528570 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.528578 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.528586 | orchestrator | 2026-03-28 00:57:03.528594 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 00:57:03.528606 | orchestrator | Saturday 28 March 2026 00:46:48 +0000 (0:00:01.571) 0:01:38.207 ******** 2026-03-28 00:57:03.528614 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.528622 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.528630 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.528638 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.528646 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.528653 | 
orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.528661 | orchestrator | 2026-03-28 00:57:03.528669 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 00:57:03.528677 | orchestrator | Saturday 28 March 2026 00:46:49 +0000 (0:00:01.162) 0:01:39.371 ******** 2026-03-28 00:57:03.528685 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.528693 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.528701 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.528709 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.528717 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.528724 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.528733 | orchestrator | 2026-03-28 00:57:03.528739 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 00:57:03.528746 | orchestrator | Saturday 28 March 2026 00:46:50 +0000 (0:00:01.121) 0:01:40.492 ******** 2026-03-28 00:57:03.528753 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.528759 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.528766 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.528777 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.528788 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.528798 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.528807 | orchestrator | 2026-03-28 00:57:03.528817 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 00:57:03.528828 | orchestrator | Saturday 28 March 2026 00:46:51 +0000 (0:00:00.999) 0:01:41.492 ******** 2026-03-28 00:57:03.528838 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.528848 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.528859 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.528871 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
00:57:03.528883 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.528896 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.528908 | orchestrator | 2026-03-28 00:57:03.528921 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 00:57:03.528933 | orchestrator | Saturday 28 March 2026 00:46:52 +0000 (0:00:00.876) 0:01:42.369 ******** 2026-03-28 00:57:03.528945 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.528957 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.528969 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.528981 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.528993 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.529006 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.529018 | orchestrator | 2026-03-28 00:57:03.529030 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 00:57:03.529043 | orchestrator | Saturday 28 March 2026 00:46:53 +0000 (0:00:00.765) 0:01:43.134 ******** 2026-03-28 00:57:03.529055 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.529067 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.529080 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.529092 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.529104 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.529117 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.529136 | orchestrator | 2026-03-28 00:57:03.529149 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 00:57:03.529162 | orchestrator | Saturday 28 March 2026 00:46:54 +0000 (0:00:00.841) 0:01:43.976 ******** 2026-03-28 00:57:03.529174 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.529186 | orchestrator | skipping: [testbed-node-4] 2026-03-28 
00:57:03.529198 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.529210 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.529222 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.529234 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.529247 | orchestrator | 2026-03-28 00:57:03.529259 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 00:57:03.529271 | orchestrator | Saturday 28 March 2026 00:46:54 +0000 (0:00:00.664) 0:01:44.640 ******** 2026-03-28 00:57:03.529283 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.529296 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.529308 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.529320 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.529332 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.529345 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.529357 | orchestrator | 2026-03-28 00:57:03.529369 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 00:57:03.529382 | orchestrator | Saturday 28 March 2026 00:46:55 +0000 (0:00:00.889) 0:01:45.530 ******** 2026-03-28 00:57:03.529394 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.529406 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.529418 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.529429 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.529440 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.529475 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.529486 | orchestrator | 2026-03-28 00:57:03.529498 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-28 00:57:03.529509 | orchestrator | Saturday 28 March 2026 00:46:56 +0000 (0:00:01.258) 0:01:46.789 ******** 2026-03-28 00:57:03.529520 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.529530 | 
orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.529541 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.529553 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.529560 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.529567 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.529573 | orchestrator |
2026-03-28 00:57:03.529586 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-28 00:57:03.529593 | orchestrator | Saturday 28 March 2026 00:46:58 +0000 (0:00:01.776) 0:01:48.565 ********
2026-03-28 00:57:03.529600 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.529606 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.529613 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.529619 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.529626 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.529632 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.529639 | orchestrator |
2026-03-28 00:57:03.529645 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-28 00:57:03.529660 | orchestrator | Saturday 28 March 2026 00:47:01 +0000 (0:00:02.613) 0:01:51.178 ********
2026-03-28 00:57:03.529667 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.529674 | orchestrator |
2026-03-28 00:57:03.529680 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-28 00:57:03.529687 | orchestrator | Saturday 28 March 2026 00:47:02 +0000 (0:00:01.256) 0:01:52.435 ********
2026-03-28 00:57:03.529694 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.529700 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.529706 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.529719 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.529726 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.529732 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.529739 | orchestrator |
2026-03-28 00:57:03.529745 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-28 00:57:03.529752 | orchestrator | Saturday 28 March 2026 00:47:03 +0000 (0:00:00.577) 0:01:53.012 ********
2026-03-28 00:57:03.529758 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.529765 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.529771 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.529777 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.529784 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.529790 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.529797 | orchestrator |
2026-03-28 00:57:03.529803 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-28 00:57:03.529810 | orchestrator | Saturday 28 March 2026 00:47:03 +0000 (0:00:00.819) 0:01:53.832 ********
2026-03-28 00:57:03.529817 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 00:57:03.529823 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 00:57:03.529830 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 00:57:03.529836 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 00:57:03.529843 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 00:57:03.529849 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:57:03.529856 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-28 00:57:03.529863 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:57:03.529869 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:57:03.529876 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:57:03.529882 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:57:03.529888 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-28 00:57:03.529895 | orchestrator |
2026-03-28 00:57:03.529905 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-28 00:57:03.529915 | orchestrator | Saturday 28 March 2026 00:47:05 +0000 (0:00:01.377) 0:01:55.210 ********
2026-03-28 00:57:03.529925 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.529936 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.529947 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.529958 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.529970 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.529982 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.529994 | orchestrator |
2026-03-28 00:57:03.530005 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-28 00:57:03.530057 | orchestrator | Saturday 28 March 2026 00:47:06 +0000 (0:00:01.159) 0:01:56.369 ********
2026-03-28 00:57:03.530066 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.530073 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.530080 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.530086 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.530093 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.530099 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.530111 | orchestrator |
2026-03-28 00:57:03.530123 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-28 00:57:03.530135 | orchestrator | Saturday 28 March 2026 00:47:07 +0000 (0:00:00.641) 0:01:57.011 ********
2026-03-28 00:57:03.530147 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.530169 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.530181 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.530193 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.530205 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.530217 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.530230 | orchestrator |
2026-03-28 00:57:03.530241 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-28 00:57:03.530254 | orchestrator | Saturday 28 March 2026 00:47:08 +0000 (0:00:00.861) 0:01:57.873 ********
2026-03-28 00:57:03.530266 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.530290 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.530302 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.530314 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.530326 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.530338 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.530351 | orchestrator |
2026-03-28 00:57:03.530363 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-28 00:57:03.530376 | orchestrator | Saturday 28 March 2026 00:47:08 +0000 (0:00:00.582) 0:01:58.456 ********
2026-03-28 00:57:03.530393 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.530406 | orchestrator |
2026-03-28 00:57:03.530418 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-28 00:57:03.530430 | orchestrator | Saturday 28 March 2026 00:47:09 +0000 (0:00:01.225) 0:01:59.681 ********
2026-03-28 00:57:03.530442 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.530474 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.530486 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.530498 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.530510 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.530522 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.530535 | orchestrator |
2026-03-28 00:57:03.530547 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-28 00:57:03.530559 | orchestrator | Saturday 28 March 2026 00:48:17 +0000 (0:01:07.750) 0:03:07.432 ********
2026-03-28 00:57:03.530571 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:57:03.530584 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:57:03.530596 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:57:03.530608 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.530620 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:57:03.530632 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:57:03.530644 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:57:03.530656 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.530668 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:57:03.530680 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:57:03.530692 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:57:03.530704 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.530717 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:57:03.530729 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:57:03.530741 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:57:03.530753 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.530765 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:57:03.530777 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:57:03.530797 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:57:03.530810 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.530822 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-28 00:57:03.530834 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-28 00:57:03.530846 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-28 00:57:03.530859 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.530871 | orchestrator |
2026-03-28 00:57:03.530883 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-28 00:57:03.530895 | orchestrator | Saturday 28 March 2026 00:48:18 +0000 (0:00:01.154) 0:03:08.586 ********
2026-03-28 00:57:03.530907 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.530919 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.530931 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.530944 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.530956 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.530968 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.530980 | orchestrator |
2026-03-28 00:57:03.530992 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-28 00:57:03.531004 | orchestrator | Saturday 28 March 2026 00:48:20 +0000 (0:00:01.575) 0:03:10.161 ********
2026-03-28 00:57:03.531017 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.531029 | orchestrator |
2026-03-28 00:57:03.531041 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-28 00:57:03.531053 | orchestrator | Saturday 28 March 2026 00:48:20 +0000 (0:00:00.197) 0:03:10.359 ********
2026-03-28 00:57:03.531065 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.531078 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.531090 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.531102 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.531114 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.531126 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.531138 | orchestrator |
2026-03-28 00:57:03.531150 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-28 00:57:03.531162 | orchestrator | Saturday 28 March 2026 00:48:21 +0000 (0:00:01.141) 0:03:11.500 ********
2026-03-28 00:57:03.531175 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.531187 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.531199 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.531211 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.531253 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.531282 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.531294 | orchestrator |
2026-03-28 00:57:03.531307 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-28 00:57:03.531319 | orchestrator | Saturday 28 March 2026 00:48:23 +0000 (0:00:01.718) 0:03:13.219 ********
2026-03-28 00:57:03.531331 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.531343 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.531355 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.531368 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.531380 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.531392 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.531404 | orchestrator |
2026-03-28 00:57:03.531423 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-28 00:57:03.531435 | orchestrator | Saturday 28 March 2026 00:48:24 +0000 (0:00:01.339) 0:03:14.558 ********
2026-03-28 00:57:03.531496 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.531509 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.531521 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.531533 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.531545 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.531566 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.531578 | orchestrator |
2026-03-28 00:57:03.531590 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-28 00:57:03.531602 | orchestrator | Saturday 28 March 2026 00:48:27 +0000 (0:00:03.221) 0:03:17.780 ********
2026-03-28 00:57:03.531614 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.531626 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.531639 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.531648 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.531655 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.531661 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.531667 | orchestrator |
2026-03-28 00:57:03.531674 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-28 00:57:03.531681 | orchestrator | Saturday 28 March 2026 00:48:28 +0000 (0:00:00.910) 0:03:18.690 ********
2026-03-28 00:57:03.531688 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.531695 | orchestrator |
2026-03-28 00:57:03.531702 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-28 00:57:03.531709 | orchestrator | Saturday 28 March 2026 00:48:30 +0000 (0:00:01.914) 0:03:20.604 ********
2026-03-28 00:57:03.531715 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.531723 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.531734 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.531745 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.531757 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.531768 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.531780 | orchestrator |
2026-03-28 00:57:03.531792 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-28 00:57:03.531803 | orchestrator | Saturday 28 March 2026 00:48:32 +0000 (0:00:01.305) 0:03:21.909 ********
2026-03-28 00:57:03.531812 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.531818 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.531825 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.531831 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.531838 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.531844 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.531851 | orchestrator |
2026-03-28 00:57:03.531857 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-28 00:57:03.531863 | orchestrator | Saturday 28 March 2026 00:48:33 +0000 (0:00:01.183) 0:03:23.092 ********
2026-03-28 00:57:03.531869 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.531875 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.531881 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.531887 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.531893 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.531899 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.531905 | orchestrator |
2026-03-28 00:57:03.531911 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-28 00:57:03.531917 | orchestrator | Saturday 28 March 2026 00:48:34 +0000 (0:00:00.819) 0:03:23.912 ********
2026-03-28 00:57:03.531923 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.531929 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.531935 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.531941 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.531947 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.531953 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.531959 | orchestrator |
2026-03-28 00:57:03.531965 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-28 00:57:03.531971 | orchestrator | Saturday 28 March 2026 00:48:35 +0000 (0:00:01.312) 0:03:25.224 ********
2026-03-28 00:57:03.531977 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.531988 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.531994 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.532000 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.532006 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.532012 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.532018 | orchestrator |
2026-03-28 00:57:03.532024 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-28 00:57:03.532031 | orchestrator | Saturday 28 March 2026 00:48:36 +0000 (0:00:00.986) 0:03:26.211 ********
2026-03-28 00:57:03.532037 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.532043 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.532049 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.532055 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.532061 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.532067 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.532073 | orchestrator |
2026-03-28 00:57:03.532079 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-28 00:57:03.532085 | orchestrator | Saturday 28 March 2026 00:48:37 +0000 (0:00:01.243) 0:03:27.454 ********
2026-03-28 00:57:03.532091 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.532097 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.532109 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.532115 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.532122 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.532128 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.532134 | orchestrator |
2026-03-28 00:57:03.532140 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-28 00:57:03.532146 | orchestrator | Saturday 28 March 2026 00:48:38 +0000 (0:00:00.994) 0:03:28.448 ********
2026-03-28 00:57:03.532152 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.532158 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.532164 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.532170 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.532176 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.532183 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.532189 | orchestrator |
2026-03-28 00:57:03.532195 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-28 00:57:03.532201 | orchestrator | Saturday 28 March 2026 00:48:39 +0000 (0:00:01.113) 0:03:29.562 ********
2026-03-28 00:57:03.532207 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.532213 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.532219 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.532226 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.532232 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.532238 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.532244 | orchestrator |
2026-03-28 00:57:03.532250 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-28 00:57:03.532256 | orchestrator | Saturday 28 March 2026 00:48:41 +0000 (0:00:02.136) 0:03:31.698 ********
2026-03-28 00:57:03.532262 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.532268 | orchestrator |
2026-03-28 00:57:03.532301 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-28 00:57:03.532308 | orchestrator | Saturday 28 March 2026 00:48:43 +0000 (0:00:01.435) 0:03:33.134 ********
2026-03-28 00:57:03.532314 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-28 00:57:03.532320 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-28 00:57:03.532326 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-28 00:57:03.532332 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-28 00:57:03.532338 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-28 00:57:03.532345 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-28 00:57:03.532355 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-28 00:57:03.532361 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-28 00:57:03.532367 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-28 00:57:03.532373 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-28 00:57:03.532379 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-28 00:57:03.532385 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-28 00:57:03.532391 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-28 00:57:03.532397 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-28 00:57:03.532403 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-28 00:57:03.532409 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-28 00:57:03.532416 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-28 00:57:03.532422 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-28 00:57:03.532428 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-28 00:57:03.532434 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-28 00:57:03.532440 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-28 00:57:03.532464 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-28 00:57:03.532470 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-28 00:57:03.532477 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-28 00:57:03.532483 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-28 00:57:03.532489 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-28 00:57:03.532495 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-28 00:57:03.532501 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-28 00:57:03.532507 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-28 00:57:03.532513 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-28 00:57:03.532519 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-28 00:57:03.532525 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-28 00:57:03.532531 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-28 00:57:03.532537 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-28 00:57:03.532543 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-28 00:57:03.532549 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-28 00:57:03.532556 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-28 00:57:03.532562 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:57:03.532568 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-28 00:57:03.532578 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-28 00:57:03.532589 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:57:03.532598 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:57:03.532613 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:57:03.532622 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-28 00:57:03.532631 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-28 00:57:03.532641 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:57:03.532652 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:57:03.532662 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:57:03.532672 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-28 00:57:03.532694 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:57:03.532704 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:57:03.532714 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:57:03.532724 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:57:03.532734 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:57:03.532746 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-28 00:57:03.532758 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:57:03.532769 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:57:03.532780 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:57:03.532790 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:57:03.532800 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:57:03.532811 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-28 00:57:03.532822 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:57:03.532833 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:57:03.532843 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:57:03.532853 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:57:03.532864 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:57:03.532874 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-28 00:57:03.532885 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:57:03.532897 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:57:03.532908 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:57:03.532919 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:57:03.532930 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:57:03.532940 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:57:03.532950 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-28 00:57:03.532962 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:57:03.532972 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-28 00:57:03.532982 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:57:03.532992 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:57:03.533003 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-28 00:57:03.533014 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:57:03.533024 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:57:03.533035 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-28 00:57:03.533046 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-28 00:57:03.533056 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-28 00:57:03.533068 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:57:03.533080 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:57:03.533091 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-28 00:57:03.533101 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-28 00:57:03.533112 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-28 00:57:03.533130 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-28 00:57:03.533142 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-28 00:57:03.533155 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-28 00:57:03.533166 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-28 00:57:03.533177 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-28 00:57:03.533189 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-28 00:57:03.533200 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-28 00:57:03.533212 | orchestrator |
2026-03-28 00:57:03.533224 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-28 00:57:03.533235 | orchestrator | Saturday 28 March 2026 00:48:51 +0000 (0:00:07.772) 0:03:40.906 ********
2026-03-28 00:57:03.533247 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.533259 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.533277 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.533288 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.533300 | orchestrator |
2026-03-28 00:57:03.533311 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-28 00:57:03.533322 | orchestrator | Saturday 28 March 2026 00:48:52 +0000 (0:00:01.512) 0:03:42.418 ********
2026-03-28 00:57:03.533333 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 00:57:03.533351 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 00:57:03.533362 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 00:57:03.533374 | orchestrator |
2026-03-28 00:57:03.533386 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-28 00:57:03.533400 | orchestrator | Saturday 28 March 2026 00:48:53 +0000 (0:00:00.818) 0:03:43.237 ********
2026-03-28 00:57:03.533407 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 00:57:03.533414 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 00:57:03.533420 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 00:57:03.533426 | orchestrator |
2026-03-28 00:57:03.533432 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-28 00:57:03.533438 | orchestrator | Saturday 28 March 2026 00:48:55 +0000 (0:00:01.650) 0:03:44.887 ********
2026-03-28 00:57:03.533464 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.533470 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.533477 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.533483 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.533489 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.533495 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.533501 | orchestrator |
2026-03-28 00:57:03.533508 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-28 00:57:03.533514 | orchestrator | Saturday 28 March 2026 00:48:55 +0000 (0:00:00.745) 0:03:45.633 ********
2026-03-28 00:57:03.533520 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.533526 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.533532 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.533538 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.533544 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.533550 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.533563 | orchestrator |
2026-03-28 00:57:03.533569 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-28 00:57:03.533575 | orchestrator | Saturday 28 March 2026 00:48:56 +0000 (0:00:01.044) 0:03:46.300 ********
2026-03-28 00:57:03.533581 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.533587 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.533593 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.533599 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.533605 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.533612 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.533618 | orchestrator |
2026-03-28 00:57:03.533624 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-28 00:57:03.533630 | orchestrator | Saturday 28 March 2026 00:48:57 +0000 (0:00:01.044) 0:03:47.344 ********
2026-03-28 00:57:03.533636 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.533642 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.533648 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.533654 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.533660 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.533666 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.533673 | orchestrator |
2026-03-28 00:57:03.533679 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-28 00:57:03.533685 | orchestrator | Saturday 28 March 2026 00:48:58 +0000 (0:00:00.639) 0:03:47.984 ********
2026-03-28 00:57:03.533691 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.533697 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.533703 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.533709 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.533715 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.533721 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.533727 | orchestrator |
2026-03-28 00:57:03.533734 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-28 00:57:03.533740 | orchestrator | Saturday 28 March 2026 00:48:59 +0000 (0:00:00.698) 0:03:49.137 ********
2026-03-28 00:57:03.533746 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.533752 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.533758 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.533764 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.533770 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.533776 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.533783 | orchestrator |
2026-03-28 00:57:03.533789 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-28 00:57:03.533795 | orchestrator | Saturday 28 March 2026 00:48:59 +0000 (0:00:00.698) 0:03:49.836 ********
2026-03-28 00:57:03.533801 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.533807 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.533813 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.533819 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.533825 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.533831 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.533837 | orchestrator |
2026-03-28 00:57:03.533849 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-28 00:57:03.533856 | orchestrator | Saturday 28 March 2026 00:49:01 +0000 (0:00:01.084) 0:03:50.921 ********
2026-03-28 00:57:03.533862 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.533868 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.533874 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.533880 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.533886 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.533892 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.533898 | orchestrator |
2026-03-28 00:57:03.533912 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-28 00:57:03.533924 | orchestrator | Saturday 28 March 2026 00:49:01 +0000 (0:00:00.813) 0:03:51.734 ********
2026-03-28 00:57:03.533930 | orchestrator | skipping:
[testbed-node-0] 2026-03-28 00:57:03.533936 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.533942 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.533948 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.533955 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.533961 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.533967 | orchestrator | 2026-03-28 00:57:03.533976 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-28 00:57:03.533986 | orchestrator | Saturday 28 March 2026 00:49:04 +0000 (0:00:02.915) 0:03:54.649 ******** 2026-03-28 00:57:03.533996 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.534005 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.534062 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.534073 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534079 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534085 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534091 | orchestrator | 2026-03-28 00:57:03.534097 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-28 00:57:03.534103 | orchestrator | Saturday 28 March 2026 00:49:05 +0000 (0:00:00.851) 0:03:55.501 ******** 2026-03-28 00:57:03.534109 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.534116 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.534122 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.534128 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534134 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534140 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534146 | orchestrator | 2026-03-28 00:57:03.534152 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-28 00:57:03.534159 | orchestrator | Saturday 28 March 2026 00:49:06 +0000 
(0:00:01.312) 0:03:56.813 ******** 2026-03-28 00:57:03.534165 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.534171 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.534177 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.534183 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534189 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534195 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534201 | orchestrator | 2026-03-28 00:57:03.534207 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-28 00:57:03.534214 | orchestrator | Saturday 28 March 2026 00:49:07 +0000 (0:00:00.916) 0:03:57.729 ******** 2026-03-28 00:57:03.534220 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-28 00:57:03.534226 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 00:57:03.534232 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 00:57:03.534238 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534244 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534250 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534256 | orchestrator | 2026-03-28 00:57:03.534262 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-28 00:57:03.534269 | orchestrator | Saturday 28 March 2026 00:49:09 +0000 (0:00:01.236) 0:03:58.966 ******** 2026-03-28 00:57:03.534277 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-28 00:57:03.534286 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-28 00:57:03.534301 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.534307 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-28 00:57:03.534314 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-28 00:57:03.534333 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.534340 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-28 00:57:03.534350 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
 2026-03-28 00:57:03.534357 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.534363 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534369 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534375 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534381 | orchestrator | 2026-03-28 00:57:03.534387 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-28 00:57:03.534393 | orchestrator | Saturday 28 March 2026 00:49:10 +0000 (0:00:00.927) 0:03:59.893 ******** 2026-03-28 00:57:03.534400 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.534406 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.534412 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.534418 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534424 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534430 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534436 | orchestrator | 2026-03-28 00:57:03.534442 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-28 00:57:03.534470 | orchestrator | Saturday 28 March 2026 00:49:11 +0000 (0:00:01.287) 0:04:01.181 ******** 2026-03-28 00:57:03.534480 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.534487 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.534493 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.534499 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534505 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534511 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534517 | orchestrator | 2026-03-28 00:57:03.534523 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-28 00:57:03.534529 | orchestrator | Saturday 28 March 
2026 00:49:12 +0000 (0:00:00.707) 0:04:01.889 ******** 2026-03-28 00:57:03.534535 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.534541 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.534547 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.534553 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534559 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534565 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534572 | orchestrator | 2026-03-28 00:57:03.534578 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-28 00:57:03.534589 | orchestrator | Saturday 28 March 2026 00:49:13 +0000 (0:00:01.084) 0:04:02.974 ******** 2026-03-28 00:57:03.534596 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.534602 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.534608 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.534614 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534620 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534626 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534632 | orchestrator | 2026-03-28 00:57:03.534640 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-28 00:57:03.534651 | orchestrator | Saturday 28 March 2026 00:49:13 +0000 (0:00:00.673) 0:04:03.647 ******** 2026-03-28 00:57:03.534660 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.534670 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.534680 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.534689 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534699 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534709 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534719 | orchestrator | 2026-03-28 00:57:03.534730 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-28 00:57:03.534741 | orchestrator | Saturday 28 March 2026 00:49:14 +0000 (0:00:00.953) 0:04:04.600 ******** 2026-03-28 00:57:03.534751 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.534761 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.534771 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534777 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.534783 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534789 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534795 | orchestrator | 2026-03-28 00:57:03.534801 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-28 00:57:03.534807 | orchestrator | Saturday 28 March 2026 00:49:15 +0000 (0:00:00.891) 0:04:05.492 ******** 2026-03-28 00:57:03.534813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:57:03.534820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:57:03.534826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:57:03.534832 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.534838 | orchestrator | 2026-03-28 00:57:03.534844 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-28 00:57:03.534851 | orchestrator | Saturday 28 March 2026 00:49:16 +0000 (0:00:00.513) 0:04:06.005 ******** 2026-03-28 00:57:03.534857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:57:03.534863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:57:03.534869 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:57:03.534875 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.534881 | orchestrator | 2026-03-28 00:57:03.534887 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-28 00:57:03.534900 | orchestrator | Saturday 28 March 2026 00:49:16 +0000 (0:00:00.814) 0:04:06.820 ******** 2026-03-28 00:57:03.534906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:57:03.534912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:57:03.534918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:57:03.534925 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.534931 | orchestrator | 2026-03-28 00:57:03.534937 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-28 00:57:03.534943 | orchestrator | Saturday 28 March 2026 00:49:17 +0000 (0:00:00.813) 0:04:07.634 ******** 2026-03-28 00:57:03.534949 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.534960 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.534966 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.534978 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.534984 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.534991 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.534997 | orchestrator | 2026-03-28 00:57:03.535003 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-28 00:57:03.535009 | orchestrator | Saturday 28 March 2026 00:49:18 +0000 (0:00:01.194) 0:04:08.829 ******** 2026-03-28 00:57:03.535015 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-28 00:57:03.535021 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-28 00:57:03.535027 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-28 00:57:03.535033 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-28 00:57:03.535039 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.535045 | orchestrator | skipping: [testbed-node-1] => 
(item=0)  2026-03-28 00:57:03.535051 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.535057 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-28 00:57:03.535063 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.535069 | orchestrator | 2026-03-28 00:57:03.535075 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-28 00:57:03.535082 | orchestrator | Saturday 28 March 2026 00:49:22 +0000 (0:00:03.106) 0:04:11.935 ******** 2026-03-28 00:57:03.535088 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.535094 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.535100 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.535106 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.535112 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.535118 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.535124 | orchestrator | 2026-03-28 00:57:03.535130 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 00:57:03.535137 | orchestrator | Saturday 28 March 2026 00:49:24 +0000 (0:00:02.883) 0:04:14.819 ******** 2026-03-28 00:57:03.535143 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.535149 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.535155 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.535161 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.535167 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.535173 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.535179 | orchestrator | 2026-03-28 00:57:03.535185 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-28 00:57:03.535191 | orchestrator | Saturday 28 March 2026 00:49:26 +0000 (0:00:01.728) 0:04:16.547 ******** 2026-03-28 00:57:03.535197 | orchestrator | skipping: 
[testbed-node-4] 2026-03-28 00:57:03.535204 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535210 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.535216 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.535223 | orchestrator | 2026-03-28 00:57:03.535229 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-28 00:57:03.535235 | orchestrator | Saturday 28 March 2026 00:49:28 +0000 (0:00:01.441) 0:04:17.989 ******** 2026-03-28 00:57:03.535241 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.535247 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.535253 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.535259 | orchestrator | 2026-03-28 00:57:03.535266 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-28 00:57:03.535272 | orchestrator | Saturday 28 March 2026 00:49:28 +0000 (0:00:00.608) 0:04:18.599 ******** 2026-03-28 00:57:03.535278 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.535284 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.535290 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.535296 | orchestrator | 2026-03-28 00:57:03.535303 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-28 00:57:03.535309 | orchestrator | Saturday 28 March 2026 00:49:30 +0000 (0:00:01.344) 0:04:19.943 ******** 2026-03-28 00:57:03.535320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 00:57:03.535326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 00:57:03.535332 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 00:57:03.535338 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.535344 | orchestrator | 2026-03-28 
00:57:03.535350 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-28 00:57:03.535356 | orchestrator | Saturday 28 March 2026 00:49:31 +0000 (0:00:00.960) 0:04:20.904 ******** 2026-03-28 00:57:03.535362 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.535369 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.535375 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.535381 | orchestrator | 2026-03-28 00:57:03.535387 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-28 00:57:03.535393 | orchestrator | Saturday 28 March 2026 00:49:31 +0000 (0:00:00.661) 0:04:21.566 ******** 2026-03-28 00:57:03.535399 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.535406 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.535412 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.535418 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.535424 | orchestrator | 2026-03-28 00:57:03.535430 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-28 00:57:03.535440 | orchestrator | Saturday 28 March 2026 00:49:32 +0000 (0:00:01.185) 0:04:22.751 ******** 2026-03-28 00:57:03.535493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:57:03.535500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:57:03.535507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:57:03.535513 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535519 | orchestrator | 2026-03-28 00:57:03.535525 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-28 00:57:03.535532 | orchestrator | Saturday 28 March 2026 00:49:33 +0000 (0:00:00.423) 
0:04:23.174 ******** 2026-03-28 00:57:03.535538 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535548 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.535555 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.535561 | orchestrator | 2026-03-28 00:57:03.535567 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-28 00:57:03.535573 | orchestrator | Saturday 28 March 2026 00:49:33 +0000 (0:00:00.342) 0:04:23.517 ******** 2026-03-28 00:57:03.535579 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535585 | orchestrator | 2026-03-28 00:57:03.535592 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-28 00:57:03.535598 | orchestrator | Saturday 28 March 2026 00:49:34 +0000 (0:00:00.659) 0:04:24.177 ******** 2026-03-28 00:57:03.535604 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535610 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.535616 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.535622 | orchestrator | 2026-03-28 00:57:03.535628 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-28 00:57:03.535634 | orchestrator | Saturday 28 March 2026 00:49:34 +0000 (0:00:00.331) 0:04:24.509 ******** 2026-03-28 00:57:03.535640 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535646 | orchestrator | 2026-03-28 00:57:03.535652 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-28 00:57:03.535659 | orchestrator | Saturday 28 March 2026 00:49:34 +0000 (0:00:00.192) 0:04:24.701 ******** 2026-03-28 00:57:03.535665 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535671 | orchestrator | 2026-03-28 00:57:03.535677 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-28 00:57:03.535683 
| orchestrator | Saturday 28 March 2026 00:49:35 +0000 (0:00:00.236) 0:04:24.937 ******** 2026-03-28 00:57:03.535694 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535701 | orchestrator | 2026-03-28 00:57:03.535707 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-28 00:57:03.535713 | orchestrator | Saturday 28 March 2026 00:49:35 +0000 (0:00:00.162) 0:04:25.100 ******** 2026-03-28 00:57:03.535719 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535725 | orchestrator | 2026-03-28 00:57:03.535731 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-28 00:57:03.535740 | orchestrator | Saturday 28 March 2026 00:49:35 +0000 (0:00:00.262) 0:04:25.362 ******** 2026-03-28 00:57:03.535751 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535761 | orchestrator | 2026-03-28 00:57:03.535771 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-28 00:57:03.535783 | orchestrator | Saturday 28 March 2026 00:49:35 +0000 (0:00:00.252) 0:04:25.614 ******** 2026-03-28 00:57:03.535794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:57:03.535805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:57:03.535812 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:57:03.535818 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535824 | orchestrator | 2026-03-28 00:57:03.535830 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-28 00:57:03.535836 | orchestrator | Saturday 28 March 2026 00:49:36 +0000 (0:00:00.514) 0:04:26.129 ******** 2026-03-28 00:57:03.535843 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535849 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.535855 | orchestrator | 
skipping: [testbed-node-5] 2026-03-28 00:57:03.535861 | orchestrator | 2026-03-28 00:57:03.535867 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-28 00:57:03.535873 | orchestrator | Saturday 28 March 2026 00:49:36 +0000 (0:00:00.699) 0:04:26.829 ******** 2026-03-28 00:57:03.535879 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535885 | orchestrator | 2026-03-28 00:57:03.535891 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-28 00:57:03.535898 | orchestrator | Saturday 28 March 2026 00:49:37 +0000 (0:00:00.277) 0:04:27.106 ******** 2026-03-28 00:57:03.535904 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.535910 | orchestrator | 2026-03-28 00:57:03.535916 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 00:57:03.535922 | orchestrator | Saturday 28 March 2026 00:49:37 +0000 (0:00:00.324) 0:04:27.431 ******** 2026-03-28 00:57:03.535928 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.535935 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.535941 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.535947 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.535953 | orchestrator | 2026-03-28 00:57:03.535960 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-28 00:57:03.535966 | orchestrator | Saturday 28 March 2026 00:49:38 +0000 (0:00:01.034) 0:04:28.465 ******** 2026-03-28 00:57:03.535972 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.535978 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.535984 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.535990 | orchestrator | 2026-03-28 00:57:03.535997 | orchestrator | RUNNING HANDLER [ceph-handler : 
Copy mds restart script] ***********************
2026-03-28 00:57:03.536003 | orchestrator | Saturday 28 March 2026 00:49:39 +0000 (0:00:00.606) 0:04:29.072 ********
2026-03-28 00:57:03.536009 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.536015 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.536020 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.536026 | orchestrator |
2026-03-28 00:57:03.536036 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-28 00:57:03.536041 | orchestrator | Saturday 28 March 2026 00:49:40 +0000 (0:00:01.331) 0:04:30.403 ********
2026-03-28 00:57:03.536052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:57:03.536058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:57:03.536063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:57:03.536068 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.536074 | orchestrator |
2026-03-28 00:57:03.536079 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-28 00:57:03.536085 | orchestrator | Saturday 28 March 2026 00:49:41 +0000 (0:00:00.704) 0:04:31.107 ********
2026-03-28 00:57:03.536094 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.536099 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.536105 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.536110 | orchestrator |
2026-03-28 00:57:03.536115 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-28 00:57:03.536121 | orchestrator | Saturday 28 March 2026 00:49:41 +0000 (0:00:00.398) 0:04:31.506 ********
2026-03-28 00:57:03.536126 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.536132 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.536137 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.536142 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.536148 | orchestrator |
2026-03-28 00:57:03.536153 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-28 00:57:03.536159 | orchestrator | Saturday 28 March 2026 00:49:42 +0000 (0:00:01.247) 0:04:32.754 ********
2026-03-28 00:57:03.536164 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.536169 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.536175 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.536180 | orchestrator |
2026-03-28 00:57:03.536186 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-28 00:57:03.536191 | orchestrator | Saturday 28 March 2026 00:49:43 +0000 (0:00:00.789) 0:04:33.544 ********
2026-03-28 00:57:03.536196 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.536202 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.536207 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.536213 | orchestrator |
2026-03-28 00:57:03.536218 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-28 00:57:03.536224 | orchestrator | Saturday 28 March 2026 00:49:45 +0000 (0:00:02.177) 0:04:35.721 ********
2026-03-28 00:57:03.536229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:57:03.536235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:57:03.536240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:57:03.536245 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.536251 | orchestrator |
2026-03-28 00:57:03.536256 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-28 00:57:03.536262 | orchestrator | Saturday 28 March 2026 00:49:46 +0000 (0:00:00.948) 0:04:36.670 ********
2026-03-28 00:57:03.536267 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.536272 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.536278 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.536283 | orchestrator |
2026-03-28 00:57:03.536288 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-28 00:57:03.536294 | orchestrator | Saturday 28 March 2026 00:49:47 +0000 (0:00:00.485) 0:04:37.155 ********
2026-03-28 00:57:03.536299 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.536304 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.536310 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.536315 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.536320 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.536326 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.536331 | orchestrator |
2026-03-28 00:57:03.536337 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-28 00:57:03.536347 | orchestrator | Saturday 28 March 2026 00:49:48 +0000 (0:00:00.961) 0:04:38.117 ********
2026-03-28 00:57:03.536353 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.536358 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.536363 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.536369 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.536374 | orchestrator |
2026-03-28 00:57:03.536380 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-28 00:57:03.536385 | orchestrator | Saturday 28 March 2026 00:49:50 +0000 (0:00:01.923) 0:04:40.040 ********
2026-03-28 00:57:03.536390 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.536396 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.536401 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.536406 | orchestrator |
2026-03-28 00:57:03.536412 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-28 00:57:03.536417 | orchestrator | Saturday 28 March 2026 00:49:50 +0000 (0:00:00.605) 0:04:40.646 ********
2026-03-28 00:57:03.536423 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.536428 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.536433 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.536438 | orchestrator |
2026-03-28 00:57:03.536463 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-28 00:57:03.536470 | orchestrator | Saturday 28 March 2026 00:49:52 +0000 (0:00:02.068) 0:04:42.715 ********
2026-03-28 00:57:03.536475 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:57:03.536481 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 00:57:03.536486 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 00:57:03.536491 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.536497 | orchestrator |
2026-03-28 00:57:03.536502 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-28 00:57:03.536507 | orchestrator | Saturday 28 March 2026 00:49:53 +0000 (0:00:00.762) 0:04:43.478 ********
2026-03-28 00:57:03.536513 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.536522 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.536527 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.536533 | orchestrator |
2026-03-28 00:57:03.536538 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-28 00:57:03.536544 | orchestrator |
2026-03-28 00:57:03.536549 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 00:57:03.536555 | orchestrator | Saturday 28 March 2026 00:49:54 +0000 (0:00:00.959) 0:04:44.438 ********
2026-03-28 00:57:03.536561 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.536566 | orchestrator |
2026-03-28 00:57:03.536576 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 00:57:03.536581 | orchestrator | Saturday 28 March 2026 00:49:55 +0000 (0:00:01.153) 0:04:45.592 ********
2026-03-28 00:57:03.536587 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.536592 | orchestrator |
2026-03-28 00:57:03.536598 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 00:57:03.536603 | orchestrator | Saturday 28 March 2026 00:49:56 +0000 (0:00:00.779) 0:04:46.371 ********
2026-03-28 00:57:03.536609 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.536614 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.536619 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.536625 | orchestrator |
2026-03-28 00:57:03.536630 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 00:57:03.536635 | orchestrator | Saturday 28 March 2026 00:49:58 +0000 (0:00:01.567) 0:04:47.939 ********
2026-03-28 00:57:03.536641 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.536650 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.536656 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.536661 | orchestrator |
2026-03-28 00:57:03.536667 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 00:57:03.536672 | orchestrator | Saturday 28 March 2026 00:49:58 +0000 (0:00:00.501) 0:04:48.441 ********
2026-03-28 00:57:03.536678 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.536683 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.536688 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.536694 | orchestrator |
2026-03-28 00:57:03.536699 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 00:57:03.536704 | orchestrator | Saturday 28 March 2026 00:49:59 +0000 (0:00:01.029) 0:04:49.470 ********
2026-03-28 00:57:03.536710 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.536715 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.536721 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.536726 | orchestrator |
2026-03-28 00:57:03.536731 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 00:57:03.536737 | orchestrator | Saturday 28 March 2026 00:50:00 +0000 (0:00:00.722) 0:04:50.193 ********
2026-03-28 00:57:03.536742 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.536748 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.536753 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.536758 | orchestrator |
2026-03-28 00:57:03.536764 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 00:57:03.536770 | orchestrator | Saturday 28 March 2026 00:50:01 +0000 (0:00:01.088) 0:04:51.282 ********
2026-03-28 00:57:03.536775 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.536780 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.536786 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.536791 | orchestrator |
2026-03-28 00:57:03.536796 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 00:57:03.536802 | orchestrator | Saturday 28 March 2026 00:50:02 +0000 (0:00:00.669) 0:04:51.951 ********
2026-03-28 00:57:03.536807 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.536813 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.536818 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.536823 | orchestrator |
2026-03-28 00:57:03.536829 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 00:57:03.536834 | orchestrator | Saturday 28 March 2026 00:50:02 +0000 (0:00:00.743) 0:04:52.695 ********
2026-03-28 00:57:03.536840 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.536845 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.536850 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.536856 | orchestrator |
2026-03-28 00:57:03.536861 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 00:57:03.536866 | orchestrator | Saturday 28 March 2026 00:50:04 +0000 (0:00:01.576) 0:04:54.271 ********
2026-03-28 00:57:03.536872 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.536877 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.536882 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.536888 | orchestrator |
2026-03-28 00:57:03.536893 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 00:57:03.536899 | orchestrator | Saturday 28 March 2026 00:50:05 +0000 (0:00:01.428) 0:04:55.700 ********
2026-03-28 00:57:03.536904 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.536909 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.536915 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.536920 | orchestrator |
2026-03-28 00:57:03.536925 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 00:57:03.536931 | orchestrator | Saturday 28 March 2026 00:50:06 +0000 (0:00:00.452) 0:04:56.152 ********
2026-03-28 00:57:03.536936 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.536941 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.536954 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.536959 | orchestrator |
2026-03-28 00:57:03.536965 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 00:57:03.536970 | orchestrator | Saturday 28 March 2026 00:50:07 +0000 (0:00:01.328) 0:04:57.481 ********
2026-03-28 00:57:03.536976 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.536981 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.536986 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.536992 | orchestrator |
2026-03-28 00:57:03.536997 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 00:57:03.537006 | orchestrator | Saturday 28 March 2026 00:50:08 +0000 (0:00:00.452) 0:04:57.934 ********
2026-03-28 00:57:03.537012 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.537017 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.537023 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.537028 | orchestrator |
2026-03-28 00:57:03.537033 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 00:57:03.537039 | orchestrator | Saturday 28 March 2026 00:50:08 +0000 (0:00:00.433) 0:04:58.368 ********
2026-03-28 00:57:03.537044 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.537050 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.537055 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.537060 | orchestrator |
2026-03-28 00:57:03.537069 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 00:57:03.537075 | orchestrator | Saturday 28 March 2026 00:50:08 +0000 (0:00:00.463) 0:04:58.831 ********
2026-03-28 00:57:03.537080 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.537086 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.537091 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.537096 | orchestrator |
2026-03-28 00:57:03.537102 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 00:57:03.537107 | orchestrator | Saturday 28 March 2026 00:50:09 +0000 (0:00:00.804) 0:04:59.635 ********
2026-03-28 00:57:03.537112 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.537118 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.537123 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.537129 | orchestrator |
2026-03-28 00:57:03.537134 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 00:57:03.537139 | orchestrator | Saturday 28 March 2026 00:50:10 +0000 (0:00:00.507) 0:05:00.143 ********
2026-03-28 00:57:03.537145 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.537150 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.537156 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.537161 | orchestrator |
2026-03-28 00:57:03.537166 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 00:57:03.537172 | orchestrator | Saturday 28 March 2026 00:50:10 +0000 (0:00:00.671) 0:05:00.814 ********
2026-03-28 00:57:03.537177 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.537182 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.537188 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.537193 | orchestrator |
2026-03-28 00:57:03.537198 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 00:57:03.537204 | orchestrator | Saturday 28 March 2026 00:50:11 +0000 (0:00:00.544) 0:05:01.359 ********
2026-03-28 00:57:03.537209 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.537214 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.537220 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.537225 | orchestrator |
2026-03-28 00:57:03.537231 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-28 00:57:03.537236 | orchestrator | Saturday 28 March 2026 00:50:12 +0000 (0:00:00.949) 0:05:02.308 ********
2026-03-28 00:57:03.537242 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.537247 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.537253 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.537258 | orchestrator |
2026-03-28 00:57:03.537264 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-28 00:57:03.537274 | orchestrator | Saturday 28 March 2026 00:50:12 +0000 (0:00:00.361) 0:05:02.669 ********
2026-03-28 00:57:03.537279 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.537284 | orchestrator |
2026-03-28 00:57:03.537290 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-28 00:57:03.537295 | orchestrator | Saturday 28 March 2026 00:50:13 +0000 (0:00:00.992) 0:05:03.662 ********
2026-03-28 00:57:03.537300 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.537306 | orchestrator |
2026-03-28 00:57:03.537311 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-28 00:57:03.537317 | orchestrator | Saturday 28 March 2026 00:50:14 +0000 (0:00:00.477) 0:05:04.140 ********
2026-03-28 00:57:03.537322 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-28 00:57:03.537327 | orchestrator |
2026-03-28 00:57:03.537333 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-28 00:57:03.537338 | orchestrator | Saturday 28 March 2026 00:50:15 +0000 (0:00:01.232) 0:05:05.373 ********
2026-03-28 00:57:03.537344 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.537349 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.537354 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.537360 | orchestrator |
2026-03-28 00:57:03.537365 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-28 00:57:03.537371 | orchestrator | Saturday 28 March 2026 00:50:16 +0000 (0:00:00.528) 0:05:05.901 ********
2026-03-28 00:57:03.537376 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.537381 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.537387 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.537392 | orchestrator |
2026-03-28 00:57:03.537397 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-28 00:57:03.537403 | orchestrator | Saturday 28 March 2026 00:50:17 +0000 (0:00:01.042) 0:05:06.944 ********
2026-03-28 00:57:03.537408 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.537414 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.537419 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.537424 | orchestrator |
2026-03-28 00:57:03.537430 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-28 00:57:03.537435 | orchestrator | Saturday 28 March 2026 00:50:18 +0000 (0:00:01.350) 0:05:08.295 ********
2026-03-28 00:57:03.537440 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.537463 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.537469 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.537475 | orchestrator |
2026-03-28 00:57:03.537480 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-28 00:57:03.537486 | orchestrator | Saturday 28 March 2026 00:50:19 +0000 (0:00:01.064) 0:05:09.591 ********
2026-03-28 00:57:03.537491 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.537497 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.537502 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.537507 | orchestrator |
2026-03-28 00:57:03.537516 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-28 00:57:03.537522 | orchestrator | Saturday 28 March 2026 00:50:20 +0000 (0:00:01.064) 0:05:10.655 ********
2026-03-28 00:57:03.537527 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.537533 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.537538 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.537544 | orchestrator |
2026-03-28 00:57:03.537549 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-28 00:57:03.537554 | orchestrator | Saturday 28 March 2026 00:50:21 +0000 (0:00:00.968) 0:05:11.624 ********
2026-03-28 00:57:03.537560 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.537565 | orchestrator |
2026-03-28 00:57:03.537571 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-28 00:57:03.537584 | orchestrator | Saturday 28 March 2026 00:50:23 +0000 (0:00:01.304) 0:05:12.929 ********
2026-03-28 00:57:03.537590 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.537595 | orchestrator |
2026-03-28 00:57:03.537600 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-28 00:57:03.537606 | orchestrator | Saturday 28 March 2026 00:50:23 +0000 (0:00:00.864) 0:05:13.793 ********
2026-03-28 00:57:03.537611 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-28 00:57:03.537617 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:57:03.537622 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:57:03.537627 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-28 00:57:03.537633 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-28 00:57:03.537638 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-28 00:57:03.537644 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-28 00:57:03.537649 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-28 00:57:03.537655 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-28 00:57:03.537660 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-28 00:57:03.537665 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-28 00:57:03.537671 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-28 00:57:03.537676 | orchestrator |
2026-03-28 00:57:03.537681 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-28 00:57:03.537687 | orchestrator | Saturday 28 March 2026 00:50:27 +0000 (0:00:03.739) 0:05:17.533 ********
2026-03-28 00:57:03.537692 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.537698 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.537703 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.537708 | orchestrator |
2026-03-28 00:57:03.537714 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-28 00:57:03.537719 | orchestrator | Saturday 28 March 2026 00:50:29 +0000 (0:00:01.487) 0:05:19.021 ********
2026-03-28 00:57:03.537725 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.537730 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.537736 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.537741 | orchestrator |
2026-03-28 00:57:03.537746 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-28 00:57:03.537752 | orchestrator | Saturday 28 March 2026 00:50:29 +0000 (0:00:00.342) 0:05:19.364 ********
2026-03-28 00:57:03.537757 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.537762 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.537768 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.537773 | orchestrator |
2026-03-28 00:57:03.537779 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-28 00:57:03.537784 | orchestrator | Saturday 28 March 2026 00:50:29 +0000 (0:00:00.368) 0:05:19.732 ********
2026-03-28 00:57:03.537789 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.537795 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.537800 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.537806 | orchestrator |
2026-03-28 00:57:03.537811 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-28 00:57:03.537816 | orchestrator | Saturday 28 March 2026 00:50:31 +0000 (0:00:01.929) 0:05:21.662 ********
2026-03-28 00:57:03.537822 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.537827 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.537833 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.537838 | orchestrator |
2026-03-28 00:57:03.537843 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-28 00:57:03.537849 | orchestrator | Saturday 28 March 2026 00:50:33 +0000 (0:00:01.772) 0:05:23.434 ********
2026-03-28 00:57:03.537854 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.537859 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.537873 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.537878 | orchestrator |
2026-03-28 00:57:03.537883 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-28 00:57:03.537889 | orchestrator | Saturday 28 March 2026 00:50:33 +0000 (0:00:00.354) 0:05:23.788 ********
2026-03-28 00:57:03.537894 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.537900 | orchestrator |
2026-03-28 00:57:03.537905 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-28 00:57:03.537911 | orchestrator | Saturday 28 March 2026 00:50:34 +0000 (0:00:00.527) 0:05:24.316 ********
2026-03-28 00:57:03.537916 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.537921 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.537926 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.537932 | orchestrator |
2026-03-28 00:57:03.537937 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-28 00:57:03.537943 | orchestrator | Saturday 28 March 2026 00:50:35 +0000 (0:00:00.612) 0:05:24.929 ********
2026-03-28 00:57:03.537948 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.537953 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.537959 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.537964 | orchestrator |
2026-03-28 00:57:03.537969 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-28 00:57:03.537978 | orchestrator | Saturday 28 March 2026 00:50:35 +0000 (0:00:00.409) 0:05:25.338 ********
2026-03-28 00:57:03.537984 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.537990 | orchestrator |
2026-03-28 00:57:03.537995 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-28 00:57:03.538001 | orchestrator | Saturday 28 March 2026 00:50:36 +0000 (0:00:00.579) 0:05:25.917 ********
2026-03-28 00:57:03.538006 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.538012 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.538102 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.538108 | orchestrator |
2026-03-28 00:57:03.538117 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-28 00:57:03.538123 | orchestrator | Saturday 28 March 2026 00:50:38 +0000 (0:00:02.805) 0:05:28.723 ********
2026-03-28 00:57:03.538129 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.538134 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.538139 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.538145 | orchestrator |
2026-03-28 00:57:03.538150 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-28 00:57:03.538156 | orchestrator | Saturday 28 March 2026 00:50:40 +0000 (0:00:01.252) 0:05:29.975 ********
2026-03-28 00:57:03.538161 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.538166 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.538172 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.538177 | orchestrator |
2026-03-28 00:57:03.538183 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-28 00:57:03.538188 | orchestrator | Saturday 28 March 2026 00:50:41 +0000 (0:00:01.884) 0:05:31.860 ********
2026-03-28 00:57:03.538194 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:57:03.538199 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:57:03.538204 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:57:03.538210 | orchestrator |
2026-03-28 00:57:03.538215 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-28 00:57:03.538220 | orchestrator | Saturday 28 March 2026 00:50:44 +0000 (0:00:02.131) 0:05:33.991 ********
2026-03-28 00:57:03.538226 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.538231 | orchestrator |
2026-03-28 00:57:03.538237 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-28 00:57:03.538247 | orchestrator | Saturday 28 March 2026 00:50:44 +0000 (0:00:00.807) 0:05:34.799 ********
2026-03-28 00:57:03.538252 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-28 00:57:03.538258 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.538263 | orchestrator |
2026-03-28 00:57:03.538269 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-28 00:57:03.538274 | orchestrator | Saturday 28 March 2026 00:51:06 +0000 (0:00:21.458) 0:05:56.258 ********
2026-03-28 00:57:03.538279 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.538285 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.538290 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.538295 | orchestrator |
2026-03-28 00:57:03.538301 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-28 00:57:03.538306 | orchestrator | Saturday 28 March 2026 00:51:12 +0000 (0:00:06.463) 0:06:02.721 ********
2026-03-28 00:57:03.538312 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.538317 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.538322 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.538328 | orchestrator |
2026-03-28 00:57:03.538333 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-28 00:57:03.538339 | orchestrator | Saturday 28 March 2026 00:51:13 +0000 (0:00:00.478) 0:06:03.200 ********
2026-03-28 00:57:03.538346 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d1860ec84dcbf216f76392b245210ac710f91236'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-28 00:57:03.538353 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d1860ec84dcbf216f76392b245210ac710f91236'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-28 00:57:03.538360 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d1860ec84dcbf216f76392b245210ac710f91236'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-28 00:57:03.538368 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d1860ec84dcbf216f76392b245210ac710f91236'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-28 00:57:03.538393 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d1860ec84dcbf216f76392b245210ac710f91236'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-28 00:57:03.538403 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d1860ec84dcbf216f76392b245210ac710f91236'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d1860ec84dcbf216f76392b245210ac710f91236'}])
2026-03-28 00:57:03.538411 | orchestrator |
2026-03-28 00:57:03.538416 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-28 00:57:03.538426 | orchestrator | Saturday 28 March 2026 00:51:24 +0000 (0:00:11.262) 0:06:14.463 ********
2026-03-28 00:57:03.538431 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.538436 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.538442 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.538465 | orchestrator |
2026-03-28 00:57:03.538475 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-28 00:57:03.538484 | orchestrator | Saturday 28 March 2026 00:51:24 +0000 (0:00:00.360) 0:06:14.824 ********
2026-03-28 00:57:03.538493 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-03-28 00:57:03.538503 | orchestrator |
2026-03-28 00:57:03.538508 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-28 00:57:03.538514 | orchestrator | Saturday 28 March 2026 00:51:25 +0000 (0:00:00.851) 0:06:15.676 ********
2026-03-28 00:57:03.538519 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.538524 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.538530 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.538535 | orchestrator |
2026-03-28 00:57:03.538540 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-28 00:57:03.538546 | orchestrator | Saturday 28 March 2026 00:51:26 +0000 (0:00:00.478) 0:06:16.154 ********
2026-03-28 00:57:03.538551 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.538556 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.538562 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.538567 | orchestrator |
2026-03-28 00:57:03.538572 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-28 00:57:03.538578 | orchestrator | Saturday 28 March 2026 00:51:26 +0000 (0:00:00.406) 0:06:16.561 ********
2026-03-28 00:57:03.538583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:57:03.538589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 00:57:03.538594 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 00:57:03.538599 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.538605 | orchestrator |
2026-03-28 00:57:03.538610 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-28 00:57:03.538615 | orchestrator | Saturday 28 March 2026 00:51:27 +0000 (0:00:00.648) 0:06:17.209 ********
2026-03-28 00:57:03.538621 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.538626 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.538631 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.538637 | orchestrator |
2026-03-28 00:57:03.538642 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-28 00:57:03.538647 | orchestrator |
2026-03-28 00:57:03.538653 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 00:57:03.538658 | orchestrator | Saturday 28 March 2026 00:51:28 +0000 (0:00:00.847) 0:06:18.057 ********
2026-03-28 00:57:03.538663 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1, testbed-node-2, testbed-node-0
2026-03-28 00:57:03.538669 | orchestrator |
2026-03-28 00:57:03.538674 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 00:57:03.538680 | orchestrator | Saturday 28 March 2026 00:51:28 +0000 (0:00:00.574) 0:06:18.631 ********
2026-03-28 00:57:03.538685 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.538690 | orchestrator |
2026-03-28 00:57:03.538696 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 00:57:03.538701 | orchestrator | Saturday 28 March 2026 00:51:29 +0000 (0:00:00.764) 0:06:19.396 ********
2026-03-28 00:57:03.538706 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.538712 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.538717 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.538722 | orchestrator |
2026-03-28 00:57:03.538733 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 00:57:03.538738 | orchestrator | Saturday 28 March 2026 00:51:30 +0000 (0:00:00.854) 0:06:20.250 ********
2026-03-28 00:57:03.538744 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.538749 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.538754 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.538759 | orchestrator |
2026-03-28 00:57:03.538765 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 00:57:03.538770 | orchestrator | Saturday 28 March 2026 00:51:30 +0000
(0:00:00.393) 0:06:20.644 ******** 2026-03-28 00:57:03.538776 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.538781 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.538787 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.538792 | orchestrator | 2026-03-28 00:57:03.538815 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 00:57:03.538822 | orchestrator | Saturday 28 March 2026 00:51:31 +0000 (0:00:00.299) 0:06:20.944 ******** 2026-03-28 00:57:03.538827 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.538832 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.538838 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.538843 | orchestrator | 2026-03-28 00:57:03.538849 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 00:57:03.538854 | orchestrator | Saturday 28 March 2026 00:51:31 +0000 (0:00:00.305) 0:06:21.250 ******** 2026-03-28 00:57:03.538859 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.538865 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.538874 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.538879 | orchestrator | 2026-03-28 00:57:03.538885 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 00:57:03.538891 | orchestrator | Saturday 28 March 2026 00:51:32 +0000 (0:00:01.125) 0:06:22.375 ******** 2026-03-28 00:57:03.538896 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.538901 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.538907 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.538912 | orchestrator | 2026-03-28 00:57:03.538917 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 00:57:03.538923 | orchestrator | Saturday 28 March 2026 00:51:32 +0000 (0:00:00.369) 
0:06:22.745 ******** 2026-03-28 00:57:03.538928 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.538934 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.538939 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.538944 | orchestrator | 2026-03-28 00:57:03.538950 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 00:57:03.538955 | orchestrator | Saturday 28 March 2026 00:51:33 +0000 (0:00:00.420) 0:06:23.165 ******** 2026-03-28 00:57:03.538961 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.538966 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.538971 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.538977 | orchestrator | 2026-03-28 00:57:03.538982 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 00:57:03.538988 | orchestrator | Saturday 28 March 2026 00:51:34 +0000 (0:00:00.790) 0:06:23.955 ******** 2026-03-28 00:57:03.538993 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.538999 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.539004 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.539009 | orchestrator | 2026-03-28 00:57:03.539015 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 00:57:03.539020 | orchestrator | Saturday 28 March 2026 00:51:35 +0000 (0:00:01.161) 0:06:25.117 ******** 2026-03-28 00:57:03.539026 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539031 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539037 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539042 | orchestrator | 2026-03-28 00:57:03.539048 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 00:57:03.539068 | orchestrator | Saturday 28 March 2026 00:51:35 +0000 (0:00:00.370) 0:06:25.488 ******** 2026-03-28 
00:57:03.539073 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.539079 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.539084 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.539089 | orchestrator | 2026-03-28 00:57:03.539095 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 00:57:03.539100 | orchestrator | Saturday 28 March 2026 00:51:35 +0000 (0:00:00.335) 0:06:25.824 ******** 2026-03-28 00:57:03.539105 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539111 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539116 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539121 | orchestrator | 2026-03-28 00:57:03.539127 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 00:57:03.539132 | orchestrator | Saturday 28 March 2026 00:51:36 +0000 (0:00:00.311) 0:06:26.135 ******** 2026-03-28 00:57:03.539137 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539143 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539148 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539153 | orchestrator | 2026-03-28 00:57:03.539159 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 00:57:03.539164 | orchestrator | Saturday 28 March 2026 00:51:36 +0000 (0:00:00.594) 0:06:26.730 ******** 2026-03-28 00:57:03.539169 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539175 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539180 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539185 | orchestrator | 2026-03-28 00:57:03.539191 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 00:57:03.539196 | orchestrator | Saturday 28 March 2026 00:51:37 +0000 (0:00:00.343) 0:06:27.074 ******** 2026-03-28 00:57:03.539201 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539207 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539212 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539217 | orchestrator | 2026-03-28 00:57:03.539223 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 00:57:03.539228 | orchestrator | Saturday 28 March 2026 00:51:37 +0000 (0:00:00.325) 0:06:27.399 ******** 2026-03-28 00:57:03.539233 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539239 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539244 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539249 | orchestrator | 2026-03-28 00:57:03.539255 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 00:57:03.539260 | orchestrator | Saturday 28 March 2026 00:51:37 +0000 (0:00:00.287) 0:06:27.686 ******** 2026-03-28 00:57:03.539265 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.539271 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.539276 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.539281 | orchestrator | 2026-03-28 00:57:03.539287 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 00:57:03.539292 | orchestrator | Saturday 28 March 2026 00:51:38 +0000 (0:00:00.875) 0:06:28.562 ******** 2026-03-28 00:57:03.539297 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.539303 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.539308 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.539313 | orchestrator | 2026-03-28 00:57:03.539319 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 00:57:03.539339 | orchestrator | Saturday 28 March 2026 00:51:39 +0000 (0:00:00.335) 0:06:28.897 ******** 2026-03-28 00:57:03.539345 | orchestrator | ok: [testbed-node-0] 
2026-03-28 00:57:03.539350 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.539355 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.539361 | orchestrator | 2026-03-28 00:57:03.539366 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-28 00:57:03.539371 | orchestrator | Saturday 28 March 2026 00:51:39 +0000 (0:00:00.571) 0:06:29.469 ******** 2026-03-28 00:57:03.539382 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 00:57:03.539387 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-28 00:57:03.539396 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-28 00:57:03.539402 | orchestrator | 2026-03-28 00:57:03.539407 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-28 00:57:03.539412 | orchestrator | Saturday 28 March 2026 00:51:40 +0000 (0:00:01.002) 0:06:30.471 ******** 2026-03-28 00:57:03.539418 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.539423 | orchestrator | 2026-03-28 00:57:03.539428 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-28 00:57:03.539434 | orchestrator | Saturday 28 March 2026 00:51:41 +0000 (0:00:00.816) 0:06:31.288 ******** 2026-03-28 00:57:03.539439 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.539480 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.539487 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.539493 | orchestrator | 2026-03-28 00:57:03.539498 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-28 00:57:03.539503 | orchestrator | Saturday 28 March 2026 00:51:42 +0000 (0:00:00.896) 0:06:32.185 ******** 2026-03-28 00:57:03.539509 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539514 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539519 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539525 | orchestrator | 2026-03-28 00:57:03.539530 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-28 00:57:03.539536 | orchestrator | Saturday 28 March 2026 00:51:42 +0000 (0:00:00.362) 0:06:32.548 ******** 2026-03-28 00:57:03.539541 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 00:57:03.539547 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 00:57:03.539552 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 00:57:03.539557 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-28 00:57:03.539563 | orchestrator | 2026-03-28 00:57:03.539568 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-28 00:57:03.539574 | orchestrator | Saturday 28 March 2026 00:51:51 +0000 (0:00:08.855) 0:06:41.403 ******** 2026-03-28 00:57:03.539579 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.539584 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.539590 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.539595 | orchestrator | 2026-03-28 00:57:03.539600 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-28 00:57:03.539606 | orchestrator | Saturday 28 March 2026 00:51:52 +0000 (0:00:00.751) 0:06:42.155 ******** 2026-03-28 00:57:03.539611 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-28 00:57:03.539616 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 00:57:03.539622 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 00:57:03.539627 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-28 00:57:03.539633 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:57:03.539638 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:57:03.539643 | orchestrator | 2026-03-28 00:57:03.539649 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-28 00:57:03.539654 | orchestrator | Saturday 28 March 2026 00:51:54 +0000 (0:00:02.471) 0:06:44.627 ******** 2026-03-28 00:57:03.539659 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-28 00:57:03.539665 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-28 00:57:03.539670 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-28 00:57:03.539676 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-28 00:57:03.539681 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 00:57:03.539694 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-28 00:57:03.539699 | orchestrator | 2026-03-28 00:57:03.539705 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-28 00:57:03.539710 | orchestrator | Saturday 28 March 2026 00:51:56 +0000 (0:00:01.453) 0:06:46.080 ******** 2026-03-28 00:57:03.539715 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.539721 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.539726 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.539731 | orchestrator | 2026-03-28 00:57:03.539737 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-28 00:57:03.539742 | orchestrator | Saturday 28 March 2026 00:51:57 +0000 (0:00:00.987) 0:06:47.067 ******** 2026-03-28 00:57:03.539748 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539753 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539758 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539764 | 
orchestrator | 2026-03-28 00:57:03.539769 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-28 00:57:03.539774 | orchestrator | Saturday 28 March 2026 00:51:57 +0000 (0:00:00.590) 0:06:47.658 ******** 2026-03-28 00:57:03.539780 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539785 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539790 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539796 | orchestrator | 2026-03-28 00:57:03.539801 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-28 00:57:03.539807 | orchestrator | Saturday 28 March 2026 00:51:58 +0000 (0:00:00.341) 0:06:47.999 ******** 2026-03-28 00:57:03.539829 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.539835 | orchestrator | 2026-03-28 00:57:03.539841 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-28 00:57:03.539846 | orchestrator | Saturday 28 March 2026 00:51:58 +0000 (0:00:00.593) 0:06:48.593 ******** 2026-03-28 00:57:03.539851 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539857 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539862 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539867 | orchestrator | 2026-03-28 00:57:03.539873 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-28 00:57:03.539882 | orchestrator | Saturday 28 March 2026 00:51:59 +0000 (0:00:00.457) 0:06:49.050 ******** 2026-03-28 00:57:03.539887 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.539893 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.539898 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.539903 | orchestrator | 2026-03-28 00:57:03.539908 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-28 00:57:03.539913 | orchestrator | Saturday 28 March 2026 00:51:59 +0000 (0:00:00.614) 0:06:49.665 ******** 2026-03-28 00:57:03.539917 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1, testbed-node-2, testbed-node-0 2026-03-28 00:57:03.539922 | orchestrator | 2026-03-28 00:57:03.539927 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-28 00:57:03.539931 | orchestrator | Saturday 28 March 2026 00:52:00 +0000 (0:00:00.586) 0:06:50.251 ******** 2026-03-28 00:57:03.539936 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.539941 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.539946 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.539951 | orchestrator | 2026-03-28 00:57:03.539955 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-28 00:57:03.539960 | orchestrator | Saturday 28 March 2026 00:52:01 +0000 (0:00:01.329) 0:06:51.580 ******** 2026-03-28 00:57:03.539965 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.539969 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.539974 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.539979 | orchestrator | 2026-03-28 00:57:03.539984 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-28 00:57:03.539992 | orchestrator | Saturday 28 March 2026 00:52:03 +0000 (0:00:01.965) 0:06:53.546 ******** 2026-03-28 00:57:03.539997 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.540001 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.540006 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.540011 | orchestrator | 2026-03-28 00:57:03.540016 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-03-28 00:57:03.540020 | orchestrator | Saturday 28 March 2026 00:52:05 +0000 (0:00:02.113) 0:06:55.660 ******** 2026-03-28 00:57:03.540025 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.540030 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.540035 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.540039 | orchestrator | 2026-03-28 00:57:03.540044 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-28 00:57:03.540049 | orchestrator | Saturday 28 March 2026 00:52:08 +0000 (0:00:02.274) 0:06:57.934 ******** 2026-03-28 00:57:03.540054 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.540058 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.540063 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-28 00:57:03.540068 | orchestrator | 2026-03-28 00:57:03.540072 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-28 00:57:03.540077 | orchestrator | Saturday 28 March 2026 00:52:08 +0000 (0:00:00.461) 0:06:58.395 ******** 2026-03-28 00:57:03.540082 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-28 00:57:03.540087 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 
2026-03-28 00:57:03.540092 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:57:03.540096 | orchestrator | 2026-03-28 00:57:03.540101 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-28 00:57:03.540106 | orchestrator | Saturday 28 March 2026 00:52:22 +0000 (0:00:13.584) 0:07:11.979 ******** 2026-03-28 00:57:03.540111 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:57:03.540116 | orchestrator | 2026-03-28 00:57:03.540120 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-28 00:57:03.540125 | orchestrator | Saturday 28 March 2026 00:52:23 +0000 (0:00:01.377) 0:07:13.357 ******** 2026-03-28 00:57:03.540130 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.540134 | orchestrator | 2026-03-28 00:57:03.540139 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-28 00:57:03.540144 | orchestrator | Saturday 28 March 2026 00:52:23 +0000 (0:00:00.333) 0:07:13.691 ******** 2026-03-28 00:57:03.540149 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.540154 | orchestrator | 2026-03-28 00:57:03.540158 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-28 00:57:03.540163 | orchestrator | Saturday 28 March 2026 00:52:23 +0000 (0:00:00.154) 0:07:13.845 ******** 2026-03-28 00:57:03.540168 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-28 00:57:03.540173 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-28 00:57:03.540178 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-28 00:57:03.540182 | orchestrator | 2026-03-28 00:57:03.540187 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-28 00:57:03.540192 | orchestrator | Saturday 28 March 2026 00:52:30 +0000 (0:00:06.621) 0:07:20.466 ******** 2026-03-28 00:57:03.540196 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-28 00:57:03.540215 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-28 00:57:03.540221 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-28 00:57:03.540229 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-28 00:57:03.540234 | orchestrator | 2026-03-28 00:57:03.540239 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 00:57:03.540243 | orchestrator | Saturday 28 March 2026 00:52:35 +0000 (0:00:04.764) 0:07:25.231 ******** 2026-03-28 00:57:03.540248 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.540253 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.540258 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.540262 | orchestrator | 2026-03-28 00:57:03.540271 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-28 00:57:03.540276 | orchestrator | Saturday 28 March 2026 00:52:36 +0000 (0:00:00.970) 0:07:26.201 ******** 2026-03-28 00:57:03.540281 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.540285 | orchestrator | 2026-03-28 00:57:03.540290 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-28 00:57:03.540295 | orchestrator | Saturday 28 March 2026 00:52:36 +0000 (0:00:00.577) 0:07:26.779 ******** 2026-03-28 00:57:03.540299 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.540304 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.540309 | orchestrator | ok: 
[testbed-node-2] 2026-03-28 00:57:03.540314 | orchestrator | 2026-03-28 00:57:03.540318 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-28 00:57:03.540323 | orchestrator | Saturday 28 March 2026 00:52:37 +0000 (0:00:00.331) 0:07:27.110 ******** 2026-03-28 00:57:03.540328 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.540333 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.540337 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.540342 | orchestrator | 2026-03-28 00:57:03.540347 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-28 00:57:03.540352 | orchestrator | Saturday 28 March 2026 00:52:38 +0000 (0:00:01.625) 0:07:28.735 ******** 2026-03-28 00:57:03.540357 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-28 00:57:03.540362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-28 00:57:03.540366 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-28 00:57:03.540371 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.540376 | orchestrator | 2026-03-28 00:57:03.540381 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-28 00:57:03.540386 | orchestrator | Saturday 28 March 2026 00:52:39 +0000 (0:00:00.668) 0:07:29.404 ******** 2026-03-28 00:57:03.540390 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.540395 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.540400 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.540404 | orchestrator | 2026-03-28 00:57:03.540409 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-28 00:57:03.540414 | orchestrator | 2026-03-28 00:57:03.540419 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 
00:57:03.540424 | orchestrator | Saturday 28 March 2026 00:52:40 +0000 (0:00:00.672) 0:07:30.076 ********
2026-03-28 00:57:03.540428 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.540433 | orchestrator |
2026-03-28 00:57:03.540438 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 00:57:03.540443 | orchestrator | Saturday 28 March 2026 00:52:41 +0000 (0:00:00.838) 0:07:30.915 ********
2026-03-28 00:57:03.540462 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.540467 | orchestrator |
2026-03-28 00:57:03.540472 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 00:57:03.540476 | orchestrator | Saturday 28 March 2026 00:52:41 +0000 (0:00:00.348) 0:07:31.475 ********
2026-03-28 00:57:03.540481 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.540490 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.540495 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.540500 | orchestrator |
2026-03-28 00:57:03.540504 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 00:57:03.540509 | orchestrator | Saturday 28 March 2026 00:52:41 +0000 (0:00:00.348) 0:07:31.823 ********
2026-03-28 00:57:03.540514 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.540519 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.540523 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.540528 | orchestrator |
2026-03-28 00:57:03.540533 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 00:57:03.540538 | orchestrator | Saturday 28 March 2026 00:52:43 +0000 (0:00:01.057) 0:07:32.880 ********
2026-03-28 00:57:03.540543 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.540547 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.540552 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.540557 | orchestrator |
2026-03-28 00:57:03.540562 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 00:57:03.540567 | orchestrator | Saturday 28 March 2026 00:52:43 +0000 (0:00:00.761) 0:07:33.642 ********
2026-03-28 00:57:03.540571 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.540576 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.540581 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.540585 | orchestrator |
2026-03-28 00:57:03.540590 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 00:57:03.540595 | orchestrator | Saturday 28 March 2026 00:52:44 +0000 (0:00:00.791) 0:07:34.433 ********
2026-03-28 00:57:03.540600 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.540604 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.540609 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.540614 | orchestrator |
2026-03-28 00:57:03.540619 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-28 00:57:03.540624 | orchestrator | Saturday 28 March 2026 00:52:44 +0000 (0:00:00.309) 0:07:34.742 ********
2026-03-28 00:57:03.540631 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.540636 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.540641 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.540646 | orchestrator |
2026-03-28 00:57:03.540650 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-28 00:57:03.540655 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:00.304) 0:07:35.334 ********
2026-03-28 00:57:03.540660 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.540665 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.540669 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.540674 | orchestrator |
2026-03-28 00:57:03.540679 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-28 00:57:03.540687 | orchestrator | Saturday 28 March 2026 00:52:45 +0000 (0:00:00.304) 0:07:35.638 ********
2026-03-28 00:57:03.540692 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.540697 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.540701 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.540706 | orchestrator |
2026-03-28 00:57:03.540711 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-28 00:57:03.540715 | orchestrator | Saturday 28 March 2026 00:52:46 +0000 (0:00:00.772) 0:07:36.410 ********
2026-03-28 00:57:03.540720 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.540725 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.540729 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.540734 | orchestrator |
2026-03-28 00:57:03.540739 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-28 00:57:03.540744 | orchestrator | Saturday 28 March 2026 00:52:47 +0000 (0:00:00.761) 0:07:37.171 ********
2026-03-28 00:57:03.540748 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.540753 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.540758 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.540766 | orchestrator |
2026-03-28 00:57:03.540771 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-28 00:57:03.540776 | orchestrator | Saturday 28 March 2026 00:52:47 +0000 (0:00:00.627) 0:07:37.798 ********
2026-03-28 00:57:03.540781 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.540785 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.540790 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.540795 | orchestrator |
2026-03-28 00:57:03.540800 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-28 00:57:03.540804 | orchestrator | Saturday 28 March 2026 00:52:48 +0000 (0:00:00.380) 0:07:38.179 ********
2026-03-28 00:57:03.540809 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.540814 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.540819 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.540823 | orchestrator |
2026-03-28 00:57:03.540828 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-28 00:57:03.540833 | orchestrator | Saturday 28 March 2026 00:52:48 +0000 (0:00:00.436) 0:07:38.616 ********
2026-03-28 00:57:03.540837 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.540842 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.540847 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.540852 | orchestrator |
2026-03-28 00:57:03.540856 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-28 00:57:03.540861 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.381) 0:07:38.997 ********
2026-03-28 00:57:03.540866 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.540870 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.540875 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.540880 | orchestrator |
2026-03-28 00:57:03.540885 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-28 00:57:03.540889 | orchestrator | Saturday 28 March 2026 00:52:49 +0000 (0:00:00.750) 0:07:39.747 ********
2026-03-28 00:57:03.540894 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.540899 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.540904 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.540908 | orchestrator |
2026-03-28 00:57:03.540913 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-28 00:57:03.540918 | orchestrator | Saturday 28 March 2026 00:52:50 +0000 (0:00:00.347) 0:07:40.095 ********
2026-03-28 00:57:03.540923 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.540927 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.540932 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.540937 | orchestrator |
2026-03-28 00:57:03.540942 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-28 00:57:03.540946 | orchestrator | Saturday 28 March 2026 00:52:51 +0000 (0:00:01.041) 0:07:41.137 ********
2026-03-28 00:57:03.540951 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.540956 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.540961 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.540965 | orchestrator |
2026-03-28 00:57:03.540970 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-28 00:57:03.540975 | orchestrator | Saturday 28 March 2026 00:52:51 +0000 (0:00:00.326) 0:07:41.464 ********
2026-03-28 00:57:03.540979 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.540984 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.540989 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.540994 | orchestrator |
2026-03-28 00:57:03.540998 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-28 00:57:03.541003 | orchestrator | Saturday 28 March 2026 00:52:52 +0000 (0:00:00.714) 0:07:42.178 ********
2026-03-28 00:57:03.541008 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.541012 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.541017 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.541022 | orchestrator |
2026-03-28 00:57:03.541027 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-28 00:57:03.541036 | orchestrator | Saturday 28 March 2026 00:52:52 +0000 (0:00:00.599) 0:07:42.778 ********
2026-03-28 00:57:03.541041 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.541046 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.541050 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.541055 | orchestrator |
2026-03-28 00:57:03.541060 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-28 00:57:03.541065 | orchestrator | Saturday 28 March 2026 00:52:53 +0000 (0:00:00.498) 0:07:43.276 ********
2026-03-28 00:57:03.541069 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 00:57:03.541079 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 00:57:03.541083 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 00:57:03.541088 | orchestrator |
2026-03-28 00:57:03.541093 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-28 00:57:03.541098 | orchestrator | Saturday 28 March 2026 00:52:54 +0000 (0:00:01.033) 0:07:44.310 ********
2026-03-28 00:57:03.541103 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.541107 | orchestrator |
2026-03-28 00:57:03.541115 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-28 00:57:03.541120 | orchestrator | Saturday 28 March 2026 00:52:55 +0000 (0:00:00.905) 0:07:45.215 ********
2026-03-28 00:57:03.541125 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.541129 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.541134 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.541139 | orchestrator |
2026-03-28 00:57:03.541144 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-28 00:57:03.541149 | orchestrator | Saturday 28 March 2026 00:52:55 +0000 (0:00:00.311) 0:07:45.527 ********
2026-03-28 00:57:03.541154 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.541158 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.541163 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.541168 | orchestrator |
2026-03-28 00:57:03.541172 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-28 00:57:03.541177 | orchestrator | Saturday 28 March 2026 00:52:55 +0000 (0:00:00.332) 0:07:45.860 ********
2026-03-28 00:57:03.541182 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.541187 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.541191 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.541196 | orchestrator |
2026-03-28 00:57:03.541201 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-28 00:57:03.541206 | orchestrator | Saturday 28 March 2026 00:52:57 +0000 (0:00:01.034) 0:07:46.895 ********
2026-03-28 00:57:03.541210 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.541215 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.541220 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.541224 | orchestrator |
2026-03-28 00:57:03.541229 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-28 00:57:03.541234 | orchestrator | Saturday 28 March 2026 00:52:57 +0000 (0:00:00.418) 0:07:47.314 ********
2026-03-28 00:57:03.541239 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-28 00:57:03.541244 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-28 00:57:03.541248 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-28 00:57:03.541253 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-28 00:57:03.541274 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-28 00:57:03.541279 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-28 00:57:03.541288 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-28 00:57:03.541293 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-28 00:57:03.541297 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-28 00:57:03.541302 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-28 00:57:03.541307 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-28 00:57:03.541311 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-28 00:57:03.541316 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-28 00:57:03.541321 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-28 00:57:03.541326 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-28 00:57:03.541330 | orchestrator |
2026-03-28 00:57:03.541335 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-28 00:57:03.541340 | orchestrator | Saturday 28 March 2026 00:53:01 +0000 (0:00:04.391) 0:07:51.705 ********
2026-03-28 00:57:03.541344 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.541349 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.541354 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.541359 | orchestrator |
2026-03-28 00:57:03.541364 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-28 00:57:03.541368 | orchestrator | Saturday 28 March 2026 00:53:02 +0000 (0:00:00.321) 0:07:52.027 ********
2026-03-28 00:57:03.541373 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.541378 | orchestrator |
2026-03-28 00:57:03.541383 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-28 00:57:03.541388 | orchestrator | Saturday 28 March 2026 00:53:02 +0000 (0:00:00.825) 0:07:52.853 ********
2026-03-28 00:57:03.541393 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-28 00:57:03.541397 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-28 00:57:03.541402 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-28 00:57:03.541407 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-28 00:57:03.541415 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-28 00:57:03.541420 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-28 00:57:03.541425 | orchestrator |
2026-03-28 00:57:03.541430 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-28 00:57:03.541435 | orchestrator | Saturday 28 March 2026 00:53:04 +0000 (0:00:01.077) 0:07:53.930 ********
2026-03-28 00:57:03.541439 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:57:03.541456 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-28 00:57:03.541464 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-28 00:57:03.541469 | orchestrator |
2026-03-28 00:57:03.541474 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-28 00:57:03.541479 | orchestrator | Saturday 28 March 2026 00:53:05 +0000 (0:00:01.800) 0:07:55.731 ********
2026-03-28 00:57:03.541483 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-28 00:57:03.541488 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-28 00:57:03.541493 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.541498 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-28 00:57:03.541503 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-28 00:57:03.541507 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.541512 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-28 00:57:03.541521 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-28 00:57:03.541526 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.541530 | orchestrator |
2026-03-28 00:57:03.541535 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-28 00:57:03.541540 | orchestrator | Saturday 28 March 2026 00:53:07 +0000 (0:00:01.488) 0:07:57.219 ********
2026-03-28 00:57:03.541544 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 00:57:03.541549 | orchestrator |
2026-03-28 00:57:03.541554 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-28 00:57:03.541559 | orchestrator | Saturday 28 March 2026 00:53:09 +0000 (0:00:02.043) 0:07:59.262 ********
2026-03-28 00:57:03.541563 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.541568 | orchestrator |
2026-03-28 00:57:03.541573 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-28 00:57:03.541577 | orchestrator | Saturday 28 March 2026 00:53:10 +0000 (0:00:00.624) 0:07:59.887 ********
2026-03-28 00:57:03.541582 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a9825c53-ea63-5cae-a5f7-e494f125bb8e', 'data_vg': 'ceph-a9825c53-ea63-5cae-a5f7-e494f125bb8e'})
2026-03-28 00:57:03.541589 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-95774a3e-10f2-5c5c-866d-eaa2f6123896', 'data_vg': 'ceph-95774a3e-10f2-5c5c-866d-eaa2f6123896'})
2026-03-28 00:57:03.541593 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3eb28a65-49e9-527a-93b6-39f945444b2a', 'data_vg': 'ceph-3eb28a65-49e9-527a-93b6-39f945444b2a'})
2026-03-28 00:57:03.541598 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d', 'data_vg': 'ceph-8fa92e37-9e8f-5bc1-86de-5e52e5346f3d'})
2026-03-28 00:57:03.541603 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6126976c-050b-5515-8c81-fb3ee245975b', 'data_vg': 'ceph-6126976c-050b-5515-8c81-fb3ee245975b'})
2026-03-28 00:57:03.541608 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8c246942-827f-54a7-8a08-735105fd2fd0', 'data_vg': 'ceph-8c246942-827f-54a7-8a08-735105fd2fd0'})
2026-03-28 00:57:03.541612 | orchestrator |
2026-03-28 00:57:03.541617 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-28 00:57:03.541622 | orchestrator | Saturday 28 March 2026 00:53:51 +0000 (0:00:41.405) 0:08:41.292 ********
2026-03-28 00:57:03.541626 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.541631 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.541636 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.541641 | orchestrator |
2026-03-28 00:57:03.541645 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-28 00:57:03.541650 | orchestrator | Saturday 28 March 2026 00:53:52 +0000 (0:00:00.700) 0:08:41.993 ********
2026-03-28 00:57:03.541655 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.541660 | orchestrator |
2026-03-28 00:57:03.541665 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-28 00:57:03.541669 | orchestrator | Saturday 28 March 2026 00:53:52 +0000 (0:00:00.588) 0:08:42.581 ********
2026-03-28 00:57:03.541674 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.541679 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.541684 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.541688 | orchestrator |
2026-03-28 00:57:03.541693 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-28 00:57:03.541698 | orchestrator | Saturday 28 March 2026 00:53:53 +0000 (0:00:00.743) 0:08:43.324 ********
2026-03-28 00:57:03.541703 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.541707 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.541712 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.541717 | orchestrator |
2026-03-28 00:57:03.541722 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-28 00:57:03.541730 | orchestrator | Saturday 28 March 2026 00:53:55 +0000 (0:00:01.906) 0:08:45.231 ********
2026-03-28 00:57:03.541735 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.541739 | orchestrator |
2026-03-28 00:57:03.541746 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-28 00:57:03.541752 | orchestrator | Saturday 28 March 2026 00:53:55 +0000 (0:00:00.613) 0:08:45.844 ********
2026-03-28 00:57:03.541756 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.541761 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.541766 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.541771 | orchestrator |
2026-03-28 00:57:03.541775 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-28 00:57:03.541780 | orchestrator | Saturday 28 March 2026 00:53:57 +0000 (0:00:01.232) 0:08:47.076 ********
2026-03-28 00:57:03.541785 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.541792 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.541797 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.541802 | orchestrator |
2026-03-28 00:57:03.541807 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-28 00:57:03.541812 | orchestrator | Saturday 28 March 2026 00:53:58 +0000 (0:00:01.588) 0:08:48.665 ********
2026-03-28 00:57:03.541816 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.541821 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.541826 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.541831 | orchestrator |
2026-03-28 00:57:03.541835 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-28 00:57:03.541840 | orchestrator | Saturday 28 March 2026 00:54:00 +0000 (0:00:01.794) 0:08:50.460 ********
2026-03-28 00:57:03.541845 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.541849 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.541854 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.541859 | orchestrator |
2026-03-28 00:57:03.541864 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-28 00:57:03.541868 | orchestrator | Saturday 28 March 2026 00:54:00 +0000 (0:00:00.346) 0:08:50.806 ********
2026-03-28 00:57:03.541873 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.541878 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.541882 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.541887 | orchestrator |
2026-03-28 00:57:03.541892 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-28 00:57:03.541897 | orchestrator | Saturday 28 March 2026 00:54:01 +0000 (0:00:00.331) 0:08:51.137 ********
2026-03-28 00:57:03.541901 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-03-28 00:57:03.541906 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-28 00:57:03.541911 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-03-28 00:57:03.541916 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-03-28 00:57:03.541920 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-03-28 00:57:03.541925 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-28 00:57:03.541930 | orchestrator |
2026-03-28 00:57:03.541934 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-28 00:57:03.541939 | orchestrator | Saturday 28 March 2026 00:54:02 +0000 (0:00:01.420) 0:08:52.558 ********
2026-03-28 00:57:03.541944 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-28 00:57:03.541949 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-03-28 00:57:03.541953 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-03-28 00:57:03.541958 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-28 00:57:03.541963 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-03-28 00:57:03.541968 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-28 00:57:03.541972 | orchestrator |
2026-03-28 00:57:03.541977 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-28 00:57:03.541988 | orchestrator | Saturday 28 March 2026 00:54:05 +0000 (0:00:02.603) 0:08:55.162 ********
2026-03-28 00:57:03.541993 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-03-28 00:57:03.541998 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-28 00:57:03.542003 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-03-28 00:57:03.542007 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-28 00:57:03.542012 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-28 00:57:03.542038 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-03-28 00:57:03.542043 | orchestrator |
2026-03-28 00:57:03.542048 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-28 00:57:03.542052 | orchestrator | Saturday 28 March 2026 00:54:09 +0000 (0:00:03.771) 0:08:58.933 ********
2026-03-28 00:57:03.542057 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542062 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.542067 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-28 00:57:03.542072 | orchestrator |
2026-03-28 00:57:03.542076 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-28 00:57:03.542081 | orchestrator | Saturday 28 March 2026 00:54:11 +0000 (0:00:02.448) 0:09:01.382 ********
2026-03-28 00:57:03.542086 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542091 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.542095 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-28 00:57:03.542100 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-28 00:57:03.542105 | orchestrator |
2026-03-28 00:57:03.542110 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-28 00:57:03.542115 | orchestrator | Saturday 28 March 2026 00:54:24 +0000 (0:00:13.057) 0:09:14.439 ********
2026-03-28 00:57:03.542119 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542124 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.542129 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.542133 | orchestrator |
2026-03-28 00:57:03.542138 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-28 00:57:03.542143 | orchestrator | Saturday 28 March 2026 00:54:25 +0000 (0:00:00.914) 0:09:15.354 ********
2026-03-28 00:57:03.542147 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542152 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.542157 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.542162 | orchestrator |
2026-03-28 00:57:03.542166 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-28 00:57:03.542175 | orchestrator | Saturday 28 March 2026 00:54:26 +0000 (0:00:00.681) 0:09:16.036 ********
2026-03-28 00:57:03.542180 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.542184 | orchestrator |
2026-03-28 00:57:03.542189 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-28 00:57:03.542194 | orchestrator | Saturday 28 March 2026 00:54:26 +0000 (0:00:00.598) 0:09:16.635 ********
2026-03-28 00:57:03.542199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:57:03.542204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:57:03.542212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:57:03.542217 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542222 | orchestrator |
2026-03-28 00:57:03.542227 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-28 00:57:03.542231 | orchestrator | Saturday 28 March 2026 00:54:27 +0000 (0:00:00.454) 0:09:17.089 ********
2026-03-28 00:57:03.542236 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542241 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.542246 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.542250 | orchestrator |
2026-03-28 00:57:03.542260 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-28 00:57:03.542265 | orchestrator | Saturday 28 March 2026 00:54:27 +0000 (0:00:00.336) 0:09:17.426 ********
2026-03-28 00:57:03.542270 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542275 | orchestrator |
2026-03-28 00:57:03.542280 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-28 00:57:03.542284 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:00.835) 0:09:18.262 ********
2026-03-28 00:57:03.542289 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542294 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.542299 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.542303 | orchestrator |
2026-03-28 00:57:03.542308 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-28 00:57:03.542313 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:00.315) 0:09:18.577 ********
2026-03-28 00:57:03.542318 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542322 | orchestrator |
2026-03-28 00:57:03.542327 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-28 00:57:03.542332 | orchestrator | Saturday 28 March 2026 00:54:28 +0000 (0:00:00.247) 0:09:18.825 ********
2026-03-28 00:57:03.542337 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542342 | orchestrator |
2026-03-28 00:57:03.542346 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-28 00:57:03.542351 | orchestrator | Saturday 28 March 2026 00:54:29 +0000 (0:00:00.280) 0:09:19.106 ********
2026-03-28 00:57:03.542356 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542361 | orchestrator |
2026-03-28 00:57:03.542366 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-28 00:57:03.542370 | orchestrator | Saturday 28 March 2026 00:54:29 +0000 (0:00:00.120) 0:09:19.226 ********
2026-03-28 00:57:03.542375 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542380 | orchestrator |
2026-03-28 00:57:03.542385 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-28 00:57:03.542389 | orchestrator | Saturday 28 March 2026 00:54:29 +0000 (0:00:00.248) 0:09:19.474 ********
2026-03-28 00:57:03.542394 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542399 | orchestrator |
2026-03-28 00:57:03.542404 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-28 00:57:03.542408 | orchestrator | Saturday 28 March 2026 00:54:29 +0000 (0:00:00.241) 0:09:19.716 ********
2026-03-28 00:57:03.542413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:57:03.542418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:57:03.542423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:57:03.542428 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542432 | orchestrator |
2026-03-28 00:57:03.542437 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-28 00:57:03.542442 | orchestrator | Saturday 28 March 2026 00:54:30 +0000 (0:00:00.435) 0:09:20.152 ********
2026-03-28 00:57:03.542462 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542469 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.542477 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.542485 | orchestrator |
2026-03-28 00:57:03.542493 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-28 00:57:03.542500 | orchestrator | Saturday 28 March 2026 00:54:30 +0000 (0:00:00.509) 0:09:20.661 ********
2026-03-28 00:57:03.542507 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542515 | orchestrator |
2026-03-28 00:57:03.542522 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-28 00:57:03.542527 | orchestrator | Saturday 28 March 2026 00:54:30 +0000 (0:00:00.202) 0:09:20.864 ********
2026-03-28 00:57:03.542531 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542536 | orchestrator |
2026-03-28 00:57:03.542541 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-28 00:57:03.542550 | orchestrator |
2026-03-28 00:57:03.542555 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-28 00:57:03.542560 | orchestrator | Saturday 28 March 2026 00:54:31 +0000 (0:00:00.597) 0:09:21.461 ********
2026-03-28 00:57:03.542565 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.542571 | orchestrator |
2026-03-28 00:57:03.542576 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-28 00:57:03.542581 | orchestrator | Saturday 28 March 2026 00:54:32 +0000 (0:00:01.094) 0:09:22.556 ********
2026-03-28 00:57:03.542589 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:57:03.542594 | orchestrator |
2026-03-28 00:57:03.542599 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-28 00:57:03.542603 | orchestrator | Saturday 28 March 2026 00:54:33 +0000 (0:00:01.181) 0:09:23.738 ********
2026-03-28 00:57:03.542608 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542613 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.542618 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.542622 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.542627 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.542632 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.542637 | orchestrator |
2026-03-28 00:57:03.542645 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-28 00:57:03.542649 | orchestrator | Saturday 28 March 2026 00:54:35 +0000 (0:00:01.349) 0:09:25.088 ********
2026-03-28 00:57:03.542654 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.542659 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.542664 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.542668 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.542673 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.542678 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.542683 | orchestrator |
2026-03-28 00:57:03.542687 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-28 00:57:03.542692 | orchestrator | Saturday 28 March 2026 00:54:35 +0000 (0:00:00.715) 0:09:25.803 ********
2026-03-28 00:57:03.542697 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.542702 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.542707 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.542711 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.542716 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.542721 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.542725 | orchestrator |
2026-03-28 00:57:03.542730 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-28 00:57:03.542735 | orchestrator | Saturday 28 March 2026 00:54:36 +0000 (0:00:00.981) 0:09:26.785 ********
2026-03-28 00:57:03.542740 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.542745 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:57:03.542749 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.542754 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:57:03.542759 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:57:03.542763 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.542768 | orchestrator |
2026-03-28 00:57:03.542773 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-28 00:57:03.542778 | orchestrator | Saturday 28 March 2026 00:54:37 +0000 (0:00:00.903) 0:09:27.688 ********
2026-03-28 00:57:03.542782 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.542787 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.542792 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.542797 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:57:03.542801 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:57:03.542806 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:57:03.542815 | orchestrator |
2026-03-28 00:57:03.542819 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container]
************************* 2026-03-28 00:57:03.542824 | orchestrator | Saturday 28 March 2026 00:54:39 +0000 (0:00:01.275) 0:09:28.963 ******** 2026-03-28 00:57:03.542829 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.542834 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.542838 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.542843 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.542848 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.542853 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.542857 | orchestrator | 2026-03-28 00:57:03.542862 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 00:57:03.542867 | orchestrator | Saturday 28 March 2026 00:54:39 +0000 (0:00:00.682) 0:09:29.646 ******** 2026-03-28 00:57:03.542872 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.542876 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.542881 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.542886 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.542890 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.542895 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.542900 | orchestrator | 2026-03-28 00:57:03.542905 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 00:57:03.542909 | orchestrator | Saturday 28 March 2026 00:54:40 +0000 (0:00:00.856) 0:09:30.503 ******** 2026-03-28 00:57:03.542914 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.542919 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.542924 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.542929 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.542933 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.542938 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.542943 | orchestrator 
| 2026-03-28 00:57:03.542948 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 00:57:03.542952 | orchestrator | Saturday 28 March 2026 00:54:42 +0000 (0:00:01.446) 0:09:31.949 ******** 2026-03-28 00:57:03.542957 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.542962 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.542967 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.542971 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.542976 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.542981 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.542985 | orchestrator | 2026-03-28 00:57:03.542990 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 00:57:03.542995 | orchestrator | Saturday 28 March 2026 00:54:43 +0000 (0:00:01.012) 0:09:32.962 ******** 2026-03-28 00:57:03.543000 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.543004 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.543009 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.543014 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.543019 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.543023 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.543028 | orchestrator | 2026-03-28 00:57:03.543033 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 00:57:03.543037 | orchestrator | Saturday 28 March 2026 00:54:43 +0000 (0:00:00.898) 0:09:33.860 ******** 2026-03-28 00:57:03.543042 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.543047 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.543054 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.543059 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.543064 | orchestrator | ok: [testbed-node-1] 2026-03-28 
00:57:03.543069 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.543074 | orchestrator | 2026-03-28 00:57:03.543078 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 00:57:03.543083 | orchestrator | Saturday 28 March 2026 00:54:44 +0000 (0:00:00.654) 0:09:34.515 ******** 2026-03-28 00:57:03.543095 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.543100 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.543105 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.543110 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.543114 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.543119 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.543124 | orchestrator | 2026-03-28 00:57:03.543132 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 00:57:03.543137 | orchestrator | Saturday 28 March 2026 00:54:45 +0000 (0:00:00.950) 0:09:35.465 ******** 2026-03-28 00:57:03.543141 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.543146 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.543151 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.543156 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.543160 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.543165 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.543170 | orchestrator | 2026-03-28 00:57:03.543174 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 00:57:03.543179 | orchestrator | Saturday 28 March 2026 00:54:46 +0000 (0:00:00.689) 0:09:36.155 ******** 2026-03-28 00:57:03.543184 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.543189 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.543193 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.543198 | orchestrator | skipping: [testbed-node-0] 
2026-03-28 00:57:03.543203 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.543208 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.543212 | orchestrator | 2026-03-28 00:57:03.543217 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 00:57:03.543222 | orchestrator | Saturday 28 March 2026 00:54:47 +0000 (0:00:00.960) 0:09:37.116 ******** 2026-03-28 00:57:03.543227 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.543231 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.543236 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.543241 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.543246 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.543250 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.543255 | orchestrator | 2026-03-28 00:57:03.543260 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 00:57:03.543264 | orchestrator | Saturday 28 March 2026 00:54:47 +0000 (0:00:00.628) 0:09:37.744 ******** 2026-03-28 00:57:03.543269 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.543274 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.543279 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.543283 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:57:03.543288 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:57:03.543293 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:57:03.543297 | orchestrator | 2026-03-28 00:57:03.543302 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 00:57:03.543307 | orchestrator | Saturday 28 March 2026 00:54:48 +0000 (0:00:00.965) 0:09:38.710 ******** 2026-03-28 00:57:03.543311 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.543316 | orchestrator | skipping: [testbed-node-4] 
2026-03-28 00:57:03.543321 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.543326 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.543330 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.543335 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.543340 | orchestrator | 2026-03-28 00:57:03.543345 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 00:57:03.543349 | orchestrator | Saturday 28 March 2026 00:54:49 +0000 (0:00:00.692) 0:09:39.403 ******** 2026-03-28 00:57:03.543354 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.543359 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.543364 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.543369 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.543377 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.543382 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.543387 | orchestrator | 2026-03-28 00:57:03.543392 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 00:57:03.543396 | orchestrator | Saturday 28 March 2026 00:54:50 +0000 (0:00:00.926) 0:09:40.330 ******** 2026-03-28 00:57:03.543401 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.543406 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.543410 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.543415 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.543420 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.543424 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.543429 | orchestrator | 2026-03-28 00:57:03.543434 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-28 00:57:03.543438 | orchestrator | Saturday 28 March 2026 00:54:51 +0000 (0:00:01.355) 0:09:41.685 ******** 2026-03-28 00:57:03.543476 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-28 00:57:03.543483 | orchestrator | 2026-03-28 00:57:03.543488 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-28 00:57:03.543493 | orchestrator | Saturday 28 March 2026 00:54:54 +0000 (0:00:03.145) 0:09:44.831 ******** 2026-03-28 00:57:03.543497 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:57:03.543502 | orchestrator | 2026-03-28 00:57:03.543507 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-28 00:57:03.543512 | orchestrator | Saturday 28 March 2026 00:54:56 +0000 (0:00:01.725) 0:09:46.556 ******** 2026-03-28 00:57:03.543517 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.543521 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.543526 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.543531 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.543536 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.543541 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.543545 | orchestrator | 2026-03-28 00:57:03.543550 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-28 00:57:03.543555 | orchestrator | Saturday 28 March 2026 00:54:58 +0000 (0:00:01.666) 0:09:48.223 ******** 2026-03-28 00:57:03.543563 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.543568 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.543572 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.543577 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.543582 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.543587 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.543591 | orchestrator | 2026-03-28 00:57:03.543596 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-28 00:57:03.543601 | orchestrator | Saturday 28 March 2026 00:54:59 +0000 (0:00:01.331) 0:09:49.554 ******** 2026-03-28 00:57:03.543609 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.543615 | orchestrator | 2026-03-28 00:57:03.543620 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-28 00:57:03.543625 | orchestrator | Saturday 28 March 2026 00:55:01 +0000 (0:00:01.387) 0:09:50.942 ******** 2026-03-28 00:57:03.543630 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.543634 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.543639 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.543644 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.543649 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.543653 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.543658 | orchestrator | 2026-03-28 00:57:03.543663 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-28 00:57:03.543668 | orchestrator | Saturday 28 March 2026 00:55:02 +0000 (0:00:01.631) 0:09:52.573 ******** 2026-03-28 00:57:03.543673 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.543681 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.543686 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.543691 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.543695 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.543700 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.543705 | orchestrator | 2026-03-28 00:57:03.543710 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-28 00:57:03.543714 | orchestrator | Saturday 28 March 2026 00:55:06 +0000 (0:00:04.223) 
0:09:56.797 ******** 2026-03-28 00:57:03.543719 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:57:03.543724 | orchestrator | 2026-03-28 00:57:03.543728 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-28 00:57:03.543733 | orchestrator | Saturday 28 March 2026 00:55:08 +0000 (0:00:01.404) 0:09:58.202 ******** 2026-03-28 00:57:03.543737 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.543742 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.543746 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.543751 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.543755 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.543760 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.543764 | orchestrator | 2026-03-28 00:57:03.543769 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-28 00:57:03.543773 | orchestrator | Saturday 28 March 2026 00:55:09 +0000 (0:00:00.807) 0:09:59.009 ******** 2026-03-28 00:57:03.543778 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.543782 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.543787 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.543791 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:57:03.543796 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:57:03.543800 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:57:03.543805 | orchestrator | 2026-03-28 00:57:03.543809 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-28 00:57:03.543814 | orchestrator | Saturday 28 March 2026 00:55:11 +0000 (0:00:02.746) 0:10:01.756 ******** 2026-03-28 00:57:03.543818 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.543823 | 
orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.543827 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.543832 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:57:03.543836 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:57:03.543841 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:57:03.543912 | orchestrator | 2026-03-28 00:57:03.543917 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-28 00:57:03.543922 | orchestrator | 2026-03-28 00:57:03.543926 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 00:57:03.543931 | orchestrator | Saturday 28 March 2026 00:55:13 +0000 (0:00:01.193) 0:10:02.950 ******** 2026-03-28 00:57:03.543936 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.543941 | orchestrator | 2026-03-28 00:57:03.543945 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 00:57:03.543950 | orchestrator | Saturday 28 March 2026 00:55:13 +0000 (0:00:00.511) 0:10:03.461 ******** 2026-03-28 00:57:03.543954 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.543959 | orchestrator | 2026-03-28 00:57:03.543963 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 00:57:03.543968 | orchestrator | Saturday 28 March 2026 00:55:14 +0000 (0:00:00.798) 0:10:04.259 ******** 2026-03-28 00:57:03.543972 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.543977 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.543981 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.543990 | orchestrator | 2026-03-28 00:57:03.543995 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-28 00:57:03.543999 | orchestrator | Saturday 28 March 2026 00:55:14 +0000 (0:00:00.314) 0:10:04.574 ******** 2026-03-28 00:57:03.544004 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.544008 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.544013 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.544017 | orchestrator | 2026-03-28 00:57:03.544022 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 00:57:03.544030 | orchestrator | Saturday 28 March 2026 00:55:15 +0000 (0:00:00.679) 0:10:05.253 ******** 2026-03-28 00:57:03.544035 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.544039 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.544044 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.544048 | orchestrator | 2026-03-28 00:57:03.544052 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 00:57:03.544057 | orchestrator | Saturday 28 March 2026 00:55:16 +0000 (0:00:00.659) 0:10:05.912 ******** 2026-03-28 00:57:03.544062 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.544066 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.544070 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.544075 | orchestrator | 2026-03-28 00:57:03.544079 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 00:57:03.544087 | orchestrator | Saturday 28 March 2026 00:55:16 +0000 (0:00:00.698) 0:10:06.611 ******** 2026-03-28 00:57:03.544092 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.544097 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.544101 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.544105 | orchestrator | 2026-03-28 00:57:03.544110 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 
00:57:03.544115 | orchestrator | Saturday 28 March 2026 00:55:17 +0000 (0:00:00.638) 0:10:07.250 ******** 2026-03-28 00:57:03.544119 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.544123 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.544128 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.544132 | orchestrator | 2026-03-28 00:57:03.544137 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 00:57:03.544144 | orchestrator | Saturday 28 March 2026 00:55:17 +0000 (0:00:00.342) 0:10:07.593 ******** 2026-03-28 00:57:03.544152 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.544159 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.544166 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.544173 | orchestrator | 2026-03-28 00:57:03.544179 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 00:57:03.544186 | orchestrator | Saturday 28 March 2026 00:55:18 +0000 (0:00:00.348) 0:10:07.941 ******** 2026-03-28 00:57:03.544192 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.544199 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.544206 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.544212 | orchestrator | 2026-03-28 00:57:03.544219 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 00:57:03.544225 | orchestrator | Saturday 28 March 2026 00:55:18 +0000 (0:00:00.710) 0:10:08.652 ******** 2026-03-28 00:57:03.544232 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.544240 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.544248 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.544255 | orchestrator | 2026-03-28 00:57:03.544262 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 00:57:03.544271 | orchestrator | 
Saturday 28 March 2026 00:55:19 +0000 (0:00:01.172) 0:10:09.825 ******** 2026-03-28 00:57:03.544276 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.544280 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.544285 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.544289 | orchestrator | 2026-03-28 00:57:03.544294 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 00:57:03.544305 | orchestrator | Saturday 28 March 2026 00:55:20 +0000 (0:00:00.525) 0:10:10.350 ******** 2026-03-28 00:57:03.544310 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.544314 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.544319 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.544324 | orchestrator | 2026-03-28 00:57:03.544328 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 00:57:03.544333 | orchestrator | Saturday 28 March 2026 00:55:20 +0000 (0:00:00.491) 0:10:10.842 ******** 2026-03-28 00:57:03.544337 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.544342 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.544346 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.544351 | orchestrator | 2026-03-28 00:57:03.544355 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 00:57:03.544360 | orchestrator | Saturday 28 March 2026 00:55:21 +0000 (0:00:00.339) 0:10:11.181 ******** 2026-03-28 00:57:03.544364 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.544369 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.544373 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.544378 | orchestrator | 2026-03-28 00:57:03.544382 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 00:57:03.544387 | orchestrator | Saturday 28 March 2026 00:55:22 +0000 
(0:00:00.781) 0:10:11.963 ******** 2026-03-28 00:57:03.544391 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.544396 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.544400 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.544405 | orchestrator | 2026-03-28 00:57:03.544409 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 00:57:03.544414 | orchestrator | Saturday 28 March 2026 00:55:22 +0000 (0:00:00.407) 0:10:12.371 ******** 2026-03-28 00:57:03.544418 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.544423 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.544427 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.544432 | orchestrator | 2026-03-28 00:57:03.544436 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 00:57:03.544441 | orchestrator | Saturday 28 March 2026 00:55:22 +0000 (0:00:00.330) 0:10:12.702 ******** 2026-03-28 00:57:03.544462 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.544467 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.544472 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.544476 | orchestrator | 2026-03-28 00:57:03.544481 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 00:57:03.544485 | orchestrator | Saturday 28 March 2026 00:55:23 +0000 (0:00:00.362) 0:10:13.064 ******** 2026-03-28 00:57:03.544490 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.544495 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.544499 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.544504 | orchestrator | 2026-03-28 00:57:03.544508 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 00:57:03.544513 | orchestrator | Saturday 28 March 2026 00:55:23 +0000 (0:00:00.692) 
0:10:13.756 ******** 2026-03-28 00:57:03.544517 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.544530 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.544535 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.544540 | orchestrator | 2026-03-28 00:57:03.544544 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 00:57:03.544549 | orchestrator | Saturday 28 March 2026 00:55:24 +0000 (0:00:00.362) 0:10:14.119 ******** 2026-03-28 00:57:03.544553 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.544558 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.544562 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.544567 | orchestrator | 2026-03-28 00:57:03.544571 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-28 00:57:03.544576 | orchestrator | Saturday 28 March 2026 00:55:24 +0000 (0:00:00.610) 0:10:14.729 ******** 2026-03-28 00:57:03.544591 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.544596 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.544600 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-28 00:57:03.544605 | orchestrator | 2026-03-28 00:57:03.544609 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-28 00:57:03.544614 | orchestrator | Saturday 28 March 2026 00:55:25 +0000 (0:00:00.768) 0:10:15.498 ******** 2026-03-28 00:57:03.544618 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:57:03.544623 | orchestrator | 2026-03-28 00:57:03.544628 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-28 00:57:03.544632 | orchestrator | Saturday 28 March 2026 00:55:27 +0000 (0:00:01.779) 0:10:17.277 ******** 2026-03-28 00:57:03.544639 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-28 00:57:03.544645 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.544650 | orchestrator | 2026-03-28 00:57:03.544654 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-28 00:57:03.544659 | orchestrator | Saturday 28 March 2026 00:55:27 +0000 (0:00:00.240) 0:10:17.517 ******** 2026-03-28 00:57:03.544665 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 00:57:03.544676 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-28 00:57:03.544680 | orchestrator | 2026-03-28 00:57:03.544685 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-28 00:57:03.544689 | orchestrator | Saturday 28 March 2026 00:55:34 +0000 (0:00:06.444) 0:10:23.961 ******** 2026-03-28 00:57:03.544694 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 00:57:03.544699 | orchestrator | 2026-03-28 00:57:03.544703 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-28 00:57:03.544708 | orchestrator | Saturday 28 March 2026 00:55:36 +0000 (0:00:02.891) 0:10:26.853 ******** 2026-03-28 00:57:03.544712 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-28 00:57:03.544717 | orchestrator | 2026-03-28 00:57:03.544721 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-28 00:57:03.544726 | orchestrator | Saturday 28 March 2026 00:55:37 +0000 (0:00:00.833) 0:10:27.687 ******** 2026-03-28 00:57:03.544730 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-28 00:57:03.544735 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-28 00:57:03.544739 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-28 00:57:03.544744 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-28 00:57:03.544748 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-28 00:57:03.544753 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-28 00:57:03.544758 | orchestrator | 2026-03-28 00:57:03.544762 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-28 00:57:03.544767 | orchestrator | Saturday 28 March 2026 00:55:38 +0000 (0:00:01.007) 0:10:28.695 ******** 2026-03-28 00:57:03.544771 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:57:03.544776 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-28 00:57:03.544785 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 00:57:03.544789 | orchestrator | 2026-03-28 00:57:03.544794 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-28 00:57:03.544798 | orchestrator | Saturday 28 March 2026 00:55:40 +0000 (0:00:01.661) 0:10:30.356 ******** 2026-03-28 00:57:03.544803 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-28 00:57:03.544808 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-28 00:57:03.544812 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.544817 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-28 00:57:03.544821 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-28 00:57:03.544826 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.544831 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 00:57:03.544838 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-28 00:57:03.544843 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.544848 | orchestrator | 2026-03-28 00:57:03.544852 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-28 00:57:03.544857 | orchestrator | Saturday 28 March 2026 00:55:41 +0000 (0:00:01.383) 0:10:31.740 ******** 2026-03-28 00:57:03.544861 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.544866 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.544870 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.544875 | orchestrator | 2026-03-28 00:57:03.544879 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-28 00:57:03.544884 | orchestrator | Saturday 28 March 2026 00:55:44 +0000 (0:00:02.593) 0:10:34.333 ******** 2026-03-28 00:57:03.544891 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.544896 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.544900 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.544905 | orchestrator | 2026-03-28 00:57:03.544909 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-28 00:57:03.544914 | orchestrator | Saturday 28 March 2026 00:55:44 +0000 (0:00:00.363) 0:10:34.697 ******** 2026-03-28 00:57:03.544919 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-28 00:57:03.544923 | orchestrator | 2026-03-28 00:57:03.544928 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-28 00:57:03.544932 | orchestrator | Saturday 28 March 2026 00:55:45 +0000 (0:00:00.601) 0:10:35.299 ******** 2026-03-28 00:57:03.544937 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.544942 | orchestrator | 2026-03-28 00:57:03.544946 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-28 00:57:03.544951 | orchestrator | Saturday 28 March 2026 00:55:46 +0000 (0:00:00.834) 0:10:36.134 ******** 2026-03-28 00:57:03.544955 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.544960 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.544964 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.544969 | orchestrator | 2026-03-28 00:57:03.544973 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-28 00:57:03.544978 | orchestrator | Saturday 28 March 2026 00:55:47 +0000 (0:00:01.249) 0:10:37.383 ******** 2026-03-28 00:57:03.544982 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.544987 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.544992 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.544996 | orchestrator | 2026-03-28 00:57:03.545001 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-28 00:57:03.545005 | orchestrator | Saturday 28 March 2026 00:55:48 +0000 (0:00:01.177) 0:10:38.560 ******** 2026-03-28 00:57:03.545010 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.545014 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.545019 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.545026 | orchestrator | 2026-03-28 
00:57:03.545031 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-28 00:57:03.545036 | orchestrator | Saturday 28 March 2026 00:55:50 +0000 (0:00:01.937) 0:10:40.498 ******** 2026-03-28 00:57:03.545040 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.545045 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.545049 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.545054 | orchestrator | 2026-03-28 00:57:03.545058 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-28 00:57:03.545063 | orchestrator | Saturday 28 March 2026 00:55:52 +0000 (0:00:01.874) 0:10:42.373 ******** 2026-03-28 00:57:03.545067 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545072 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545076 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545081 | orchestrator | 2026-03-28 00:57:03.545086 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-28 00:57:03.545090 | orchestrator | Saturday 28 March 2026 00:55:54 +0000 (0:00:01.613) 0:10:43.987 ******** 2026-03-28 00:57:03.545095 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.545099 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.545104 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.545108 | orchestrator | 2026-03-28 00:57:03.545113 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-28 00:57:03.545117 | orchestrator | Saturday 28 March 2026 00:55:54 +0000 (0:00:00.695) 0:10:44.682 ******** 2026-03-28 00:57:03.545122 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.545127 | orchestrator | 2026-03-28 00:57:03.545131 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-28 00:57:03.545136 | orchestrator | Saturday 28 March 2026 00:55:55 +0000 (0:00:00.612) 0:10:45.295 ******** 2026-03-28 00:57:03.545140 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545145 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545149 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545154 | orchestrator | 2026-03-28 00:57:03.545158 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-28 00:57:03.545163 | orchestrator | Saturday 28 March 2026 00:55:56 +0000 (0:00:00.616) 0:10:45.912 ******** 2026-03-28 00:57:03.545168 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.545172 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.545177 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.545181 | orchestrator | 2026-03-28 00:57:03.545186 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-28 00:57:03.545190 | orchestrator | Saturday 28 March 2026 00:55:57 +0000 (0:00:01.213) 0:10:47.126 ******** 2026-03-28 00:57:03.545195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-28 00:57:03.545200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-28 00:57:03.545204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-28 00:57:03.545209 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545213 | orchestrator | 2026-03-28 00:57:03.545218 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-28 00:57:03.545225 | orchestrator | Saturday 28 March 2026 00:55:57 +0000 (0:00:00.738) 0:10:47.864 ******** 2026-03-28 00:57:03.545230 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545235 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545239 | orchestrator | ok: [testbed-node-5] 2026-03-28 
00:57:03.545244 | orchestrator | 2026-03-28 00:57:03.545248 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-28 00:57:03.545253 | orchestrator | 2026-03-28 00:57:03.545257 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-28 00:57:03.545262 | orchestrator | Saturday 28 March 2026 00:55:58 +0000 (0:00:00.620) 0:10:48.485 ******** 2026-03-28 00:57:03.545269 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.545278 | orchestrator | 2026-03-28 00:57:03.545282 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-28 00:57:03.545287 | orchestrator | Saturday 28 March 2026 00:55:59 +0000 (0:00:00.843) 0:10:49.328 ******** 2026-03-28 00:57:03.545291 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.545296 | orchestrator | 2026-03-28 00:57:03.545300 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-28 00:57:03.545305 | orchestrator | Saturday 28 March 2026 00:56:00 +0000 (0:00:00.610) 0:10:49.939 ******** 2026-03-28 00:57:03.545309 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545314 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.545319 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.545323 | orchestrator | 2026-03-28 00:57:03.545328 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-28 00:57:03.545332 | orchestrator | Saturday 28 March 2026 00:56:00 +0000 (0:00:00.620) 0:10:50.560 ******** 2026-03-28 00:57:03.545337 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545341 | orchestrator | ok: [testbed-node-4] 2026-03-28 
00:57:03.545346 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545350 | orchestrator | 2026-03-28 00:57:03.545355 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-28 00:57:03.545359 | orchestrator | Saturday 28 March 2026 00:56:01 +0000 (0:00:00.739) 0:10:51.299 ******** 2026-03-28 00:57:03.545364 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545368 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545373 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545377 | orchestrator | 2026-03-28 00:57:03.545382 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-28 00:57:03.545387 | orchestrator | Saturday 28 March 2026 00:56:02 +0000 (0:00:00.806) 0:10:52.105 ******** 2026-03-28 00:57:03.545391 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545396 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545400 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545405 | orchestrator | 2026-03-28 00:57:03.545409 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-28 00:57:03.545414 | orchestrator | Saturday 28 March 2026 00:56:02 +0000 (0:00:00.709) 0:10:52.815 ******** 2026-03-28 00:57:03.545418 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545423 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.545428 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.545432 | orchestrator | 2026-03-28 00:57:03.545437 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-28 00:57:03.545441 | orchestrator | Saturday 28 March 2026 00:56:03 +0000 (0:00:00.577) 0:10:53.392 ******** 2026-03-28 00:57:03.545458 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545463 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.545467 | orchestrator | skipping: 
[testbed-node-5] 2026-03-28 00:57:03.545472 | orchestrator | 2026-03-28 00:57:03.545476 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-28 00:57:03.545481 | orchestrator | Saturday 28 March 2026 00:56:03 +0000 (0:00:00.318) 0:10:53.710 ******** 2026-03-28 00:57:03.545485 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545490 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.545494 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.545499 | orchestrator | 2026-03-28 00:57:03.545503 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-28 00:57:03.545508 | orchestrator | Saturday 28 March 2026 00:56:04 +0000 (0:00:00.337) 0:10:54.047 ******** 2026-03-28 00:57:03.545512 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545517 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545521 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545526 | orchestrator | 2026-03-28 00:57:03.545534 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-28 00:57:03.545538 | orchestrator | Saturday 28 March 2026 00:56:05 +0000 (0:00:00.835) 0:10:54.883 ******** 2026-03-28 00:57:03.545543 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545547 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545552 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545556 | orchestrator | 2026-03-28 00:57:03.545561 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-28 00:57:03.545565 | orchestrator | Saturday 28 March 2026 00:56:06 +0000 (0:00:00.999) 0:10:55.882 ******** 2026-03-28 00:57:03.545570 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545574 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.545579 | orchestrator | skipping: [testbed-node-5] 2026-03-28 
00:57:03.545583 | orchestrator | 2026-03-28 00:57:03.545588 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-28 00:57:03.545592 | orchestrator | Saturday 28 March 2026 00:56:06 +0000 (0:00:00.351) 0:10:56.234 ******** 2026-03-28 00:57:03.545597 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545601 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.545606 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.545610 | orchestrator | 2026-03-28 00:57:03.545615 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-28 00:57:03.545619 | orchestrator | Saturday 28 March 2026 00:56:06 +0000 (0:00:00.318) 0:10:56.553 ******** 2026-03-28 00:57:03.545624 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545628 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545633 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545637 | orchestrator | 2026-03-28 00:57:03.545644 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-28 00:57:03.545649 | orchestrator | Saturday 28 March 2026 00:56:07 +0000 (0:00:00.327) 0:10:56.880 ******** 2026-03-28 00:57:03.545653 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545658 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545662 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545667 | orchestrator | 2026-03-28 00:57:03.545671 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-28 00:57:03.545676 | orchestrator | Saturday 28 March 2026 00:56:07 +0000 (0:00:00.617) 0:10:57.498 ******** 2026-03-28 00:57:03.545681 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545685 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545690 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545694 | orchestrator | 2026-03-28 
00:57:03.545701 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-28 00:57:03.545706 | orchestrator | Saturday 28 March 2026 00:56:07 +0000 (0:00:00.326) 0:10:57.825 ******** 2026-03-28 00:57:03.545711 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545715 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.545720 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.545724 | orchestrator | 2026-03-28 00:57:03.545729 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-28 00:57:03.545733 | orchestrator | Saturday 28 March 2026 00:56:08 +0000 (0:00:00.309) 0:10:58.134 ******** 2026-03-28 00:57:03.545738 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545742 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.545747 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.545751 | orchestrator | 2026-03-28 00:57:03.545756 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-28 00:57:03.545761 | orchestrator | Saturday 28 March 2026 00:56:08 +0000 (0:00:00.298) 0:10:58.433 ******** 2026-03-28 00:57:03.545765 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545770 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.545774 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.545779 | orchestrator | 2026-03-28 00:57:03.545783 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-28 00:57:03.545791 | orchestrator | Saturday 28 March 2026 00:56:09 +0000 (0:00:00.607) 0:10:59.040 ******** 2026-03-28 00:57:03.545796 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545800 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545805 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545809 | orchestrator | 2026-03-28 00:57:03.545814 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-28 00:57:03.545818 | orchestrator | Saturday 28 March 2026 00:56:09 +0000 (0:00:00.336) 0:10:59.376 ******** 2026-03-28 00:57:03.545823 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:57:03.545827 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:57:03.545832 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:57:03.545836 | orchestrator | 2026-03-28 00:57:03.545841 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-28 00:57:03.545846 | orchestrator | Saturday 28 March 2026 00:56:10 +0000 (0:00:00.537) 0:10:59.913 ******** 2026-03-28 00:57:03.545850 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.545855 | orchestrator | 2026-03-28 00:57:03.545859 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-28 00:57:03.545864 | orchestrator | Saturday 28 March 2026 00:56:10 +0000 (0:00:00.871) 0:11:00.785 ******** 2026-03-28 00:57:03.545868 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:57:03.545873 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-28 00:57:03.545877 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 00:57:03.545882 | orchestrator | 2026-03-28 00:57:03.545887 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-28 00:57:03.545891 | orchestrator | Saturday 28 March 2026 00:56:12 +0000 (0:00:01.836) 0:11:02.621 ******** 2026-03-28 00:57:03.545896 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-28 00:57:03.545900 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-28 00:57:03.545905 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.545909 | orchestrator 
| changed: [testbed-node-3] => (item=None) 2026-03-28 00:57:03.545914 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-28 00:57:03.545918 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.545923 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 00:57:03.545927 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-28 00:57:03.545932 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.545936 | orchestrator | 2026-03-28 00:57:03.545941 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-28 00:57:03.545946 | orchestrator | Saturday 28 March 2026 00:56:13 +0000 (0:00:01.238) 0:11:03.860 ******** 2026-03-28 00:57:03.545950 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.545955 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:57:03.545959 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:57:03.545964 | orchestrator | 2026-03-28 00:57:03.545968 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-28 00:57:03.545973 | orchestrator | Saturday 28 March 2026 00:56:14 +0000 (0:00:00.335) 0:11:04.195 ******** 2026-03-28 00:57:03.545977 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 00:57:03.545982 | orchestrator | 2026-03-28 00:57:03.545987 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-28 00:57:03.545991 | orchestrator | Saturday 28 March 2026 00:56:15 +0000 (0:00:00.835) 0:11:05.030 ******** 2026-03-28 00:57:03.545996 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-28 00:57:03.546003 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-28 00:57:03.546011 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-28 00:57:03.546041 | orchestrator | 2026-03-28 00:57:03.546046 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-28 00:57:03.546051 | orchestrator | Saturday 28 March 2026 00:56:16 +0000 (0:00:00.911) 0:11:05.942 ******** 2026-03-28 00:57:03.546055 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:57:03.546063 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-28 00:57:03.546068 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:57:03.546072 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-28 00:57:03.546077 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:57:03.546082 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-28 00:57:03.546086 | orchestrator | 2026-03-28 00:57:03.546091 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-28 00:57:03.546096 | orchestrator | Saturday 28 March 2026 00:56:19 +0000 (0:00:03.730) 0:11:09.673 ******** 2026-03-28 00:57:03.546100 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:57:03.546105 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 00:57:03.546109 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:57:03.546114 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 00:57:03.546118 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:57:03.546123 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 00:57:03.546127 | orchestrator | 2026-03-28 00:57:03.546132 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-28 00:57:03.546136 | orchestrator | Saturday 28 March 2026 00:56:22 +0000 (0:00:02.281) 0:11:11.955 ******** 2026-03-28 00:57:03.546141 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-28 00:57:03.546145 | orchestrator | changed: [testbed-node-3] 2026-03-28 00:57:03.546150 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-28 00:57:03.546155 | orchestrator | changed: [testbed-node-4] 2026-03-28 00:57:03.546159 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 00:57:03.546164 | orchestrator | changed: [testbed-node-5] 2026-03-28 00:57:03.546168 | orchestrator | 2026-03-28 00:57:03.546173 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-28 00:57:03.546177 | orchestrator | Saturday 28 March 2026 00:56:23 +0000 (0:00:01.514) 0:11:13.469 ******** 2026-03-28 00:57:03.546182 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-28 00:57:03.546187 | orchestrator | 2026-03-28 00:57:03.546191 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-28 00:57:03.546196 | orchestrator | Saturday 28 March 2026 00:56:24 +0000 (0:00:00.472) 0:11:13.942 ******** 2026-03-28 00:57:03.546200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-28 00:57:03.546205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 00:57:03.546210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 00:57:03.546214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 00:57:03.546222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 00:57:03.546227 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:57:03.546231 | orchestrator | 2026-03-28 00:57:03.546236 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-28 00:57:03.546240 | orchestrator | Saturday 28 March 2026 00:56:24 +0000 (0:00:00.618) 0:11:14.560 ******** 2026-03-28 00:57:03.546245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 00:57:03.546250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 00:57:03.546254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 00:57:03.546259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 00:57:03.546263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-28 00:57:03.546271 | orchestrator | skipping: [testbed-node-3] 2026-03-28 
00:57:03.546275 | orchestrator |
2026-03-28 00:57:03.546280 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-28 00:57:03.546285 | orchestrator | Saturday 28 March 2026 00:56:25 +0000 (0:00:01.045) 0:11:15.605 ********
2026-03-28 00:57:03.546289 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 00:57:03.546297 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 00:57:03.546302 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 00:57:03.546306 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 00:57:03.546311 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-28 00:57:03.546316 | orchestrator |
2026-03-28 00:57:03.546320 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-03-28 00:57:03.546325 | orchestrator | Saturday 28 March 2026 00:56:49 +0000 (0:00:23.462) 0:11:39.068 ********
2026-03-28 00:57:03.546329 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.546334 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.546338 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.546343 | orchestrator |
2026-03-28 00:57:03.546348 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-03-28 00:57:03.546352 | orchestrator | Saturday 28 March 2026 00:56:49 +0000 (0:00:00.555) 0:11:39.623 ********
2026-03-28 00:57:03.546357 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.546361 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.546366 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.546370 | orchestrator |
2026-03-28 00:57:03.546375 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-28 00:57:03.546380 | orchestrator | Saturday 28 March 2026 00:56:50 +0000 (0:00:00.357) 0:11:39.980 ********
2026-03-28 00:57:03.546384 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.546389 | orchestrator |
2026-03-28 00:57:03.546393 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-28 00:57:03.546401 | orchestrator | Saturday 28 March 2026 00:56:50 +0000 (0:00:00.704) 0:11:40.685 ********
2026-03-28 00:57:03.546406 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.546410 | orchestrator |
2026-03-28 00:57:03.546415 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-28 00:57:03.546419 | orchestrator | Saturday 28 March 2026 00:56:51 +0000 (0:00:00.725) 0:11:41.410 ********
2026-03-28 00:57:03.546424 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.546428 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.546433 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.546437 | orchestrator |
2026-03-28 00:57:03.546442 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-28 00:57:03.546482 | orchestrator | Saturday 28 March 2026 00:56:53 +0000 (0:00:01.537) 0:11:42.948 ********
2026-03-28 00:57:03.546490 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.546497 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.546504 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.546508 | orchestrator |
2026-03-28 00:57:03.546513 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-28 00:57:03.546517 | orchestrator | Saturday 28 March 2026 00:56:54 +0000 (0:00:01.223) 0:11:44.172 ********
2026-03-28 00:57:03.546522 | orchestrator | changed: [testbed-node-5]
2026-03-28 00:57:03.546526 | orchestrator | changed: [testbed-node-4]
2026-03-28 00:57:03.546530 | orchestrator | changed: [testbed-node-3]
2026-03-28 00:57:03.546535 | orchestrator |
2026-03-28 00:57:03.546539 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-28 00:57:03.546544 | orchestrator | Saturday 28 March 2026 00:56:56 +0000 (0:00:02.203) 0:11:46.375 ********
2026-03-28 00:57:03.546548 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-28 00:57:03.546553 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-28 00:57:03.546558 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-28 00:57:03.546562 | orchestrator |
2026-03-28 00:57:03.546567 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-28 00:57:03.546571 | orchestrator | Saturday 28 March 2026 00:56:58 +0000 (0:00:02.411) 0:11:48.787 ********
2026-03-28 00:57:03.546576 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.546581 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.546585 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.546590 | orchestrator |
2026-03-28 00:57:03.546594 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-28 00:57:03.546599 | orchestrator | Saturday 28 March 2026 00:56:59 +0000 (0:00:00.482) 0:11:49.270 ********
2026-03-28 00:57:03.546606 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:57:03.546611 | orchestrator |
2026-03-28 00:57:03.546616 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-28 00:57:03.546620 | orchestrator | Saturday 28 March 2026 00:56:59 +0000 (0:00:00.498) 0:11:49.768 ********
2026-03-28 00:57:03.546625 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.546629 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.546634 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.546638 | orchestrator |
2026-03-28 00:57:03.546643 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-28 00:57:03.546648 | orchestrator | Saturday 28 March 2026 00:57:00 +0000 (0:00:00.376) 0:11:50.145 ********
2026-03-28 00:57:03.546652 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.546660 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:57:03.546665 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:57:03.546675 | orchestrator |
2026-03-28 00:57:03.546679 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-28 00:57:03.546684 | orchestrator | Saturday 28 March 2026 00:57:00 +0000 (0:00:00.464) 0:11:50.609 ********
2026-03-28 00:57:03.546688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:57:03.546693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:57:03.546698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:57:03.546702 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:57:03.546707 | orchestrator |
2026-03-28 00:57:03.546711 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-28 00:57:03.546716 | orchestrator | Saturday 28 March 2026 00:57:02 +0000 (0:00:01.408) 0:11:52.018 ********
2026-03-28 00:57:03.546720 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:57:03.546725 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:57:03.546729 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:57:03.546734 | orchestrator |
2026-03-28 00:57:03.546738 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:57:03.546743 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-03-28 00:57:03.546748 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-03-28 00:57:03.546752 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-03-28 00:57:03.546757 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-03-28 00:57:03.546761 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-03-28 00:57:03.546766 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-03-28 00:57:03.546770 | orchestrator |
2026-03-28 00:57:03.546775 | orchestrator |
2026-03-28 00:57:03.546780 | orchestrator |
2026-03-28 00:57:03.546784 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:57:03.546789 | orchestrator | Saturday 28 March 2026 00:57:02 +0000 (0:00:00.401) 0:11:52.420 ********
2026-03-28 00:57:03.546793 | orchestrator | ===============================================================================
2026-03-28 00:57:03.546798 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 67.75s
2026-03-28 00:57:03.546802 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.41s
2026-03-28 00:57:03.546807 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 23.46s
2026-03-28 00:57:03.546811 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.46s
2026-03-28 00:57:03.546816 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.58s
2026-03-28 00:57:03.546820 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.06s
2026-03-28 00:57:03.546825 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 11.26s
2026-03-28 00:57:03.546830 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 8.86s
2026-03-28 00:57:03.546834 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.77s
2026-03-28 00:57:03.546839 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.62s
2026-03-28 00:57:03.546843 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.46s
2026-03-28 00:57:03.546847 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.44s
2026-03-28 00:57:03.546851 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.76s
2026-03-28 00:57:03.546859 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.51s
2026-03-28 00:57:03.546863 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.39s
2026-03-28 00:57:03.546867 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.22s
2026-03-28
00:57:03.546871 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 4.17s
2026-03-28 00:57:03.546875 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.77s
2026-03-28 00:57:03.546879 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.74s
2026-03-28 00:57:03.546883 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.73s
2026-03-28 00:57:06.566081 | orchestrator | 2026-03-28 00:57:06 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:57:06.568555 | orchestrator | 2026-03-28 00:57:06 | INFO  | Task e4971d96-eea0-4612-bcb8-2ac73332beb4 is in state STARTED
2026-03-28 00:57:06.570283 | orchestrator | 2026-03-28 00:57:06 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:57:06.570317 | orchestrator | 2026-03-28 00:57:06 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:59:08.709838 | orchestrator | 2026-03-28 00:59:08 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED
2026-03-28 00:59:08.713330 | orchestrator | 2026-03-28 00:59:08 | INFO  | Task e4971d96-eea0-4612-bcb8-2ac73332beb4 is in state SUCCESS
2026-03-28 00:59:08.717181 | orchestrator |
2026-03-28 00:59:08.717527 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-28 00:59:08.717550 | orchestrator | 2.16.14
2026-03-28 00:59:08.717563 | orchestrator |
2026-03-28 00:59:08.717574 | orchestrator | PLAY [Create ceph pools]
*******************************************************
2026-03-28 00:59:08.717586 | orchestrator |
2026-03-28 00:59:08.717598 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-28 00:59:08.717609 | orchestrator | Saturday 28 March 2026 00:57:08 +0000 (0:00:00.643) 0:00:00.643 ********
2026-03-28 00:59:08.717676 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:59:08.717693 | orchestrator |
2026-03-28 00:59:08.717707 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-28 00:59:08.717719 | orchestrator | Saturday 28 March 2026 00:57:09 +0000 (0:00:00.664) 0:00:01.307 ********
2026-03-28 00:59:08.717732 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.717746 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.717759 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.717771 | orchestrator |
2026-03-28 00:59:08.717784 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-28 00:59:08.717797 | orchestrator | Saturday 28 March 2026 00:57:10 +0000 (0:00:01.067) 0:00:02.375 ********
2026-03-28 00:59:08.717810 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.717823 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.717835 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.717849 | orchestrator |
2026-03-28 00:59:08.717862 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-28 00:59:08.717875 | orchestrator | Saturday 28 March 2026 00:57:10 +0000 (0:00:00.330) 0:00:02.705 ********
2026-03-28 00:59:08.717888 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.717900 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.717912 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.717925 | orchestrator |
2026-03-28 00:59:08.717937 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-28 00:59:08.717950 | orchestrator | Saturday 28 March 2026 00:57:11 +0000 (0:00:00.834) 0:00:03.540 ********
2026-03-28 00:59:08.717961 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.717972 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.717983 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.717993 | orchestrator |
2026-03-28 00:59:08.718004 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-28 00:59:08.718072 | orchestrator | Saturday 28 March 2026 00:57:11 +0000 (0:00:00.335) 0:00:03.875 ********
2026-03-28 00:59:08.718088 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.718099 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.718110 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.718121 | orchestrator |
2026-03-28 00:59:08.718132 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-28 00:59:08.718143 | orchestrator | Saturday 28 March 2026 00:57:11 +0000 (0:00:00.296) 0:00:04.171 ********
2026-03-28 00:59:08.718153 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.718164 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.718175 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.718186 | orchestrator |
2026-03-28 00:59:08.718197 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-28 00:59:08.718208 | orchestrator | Saturday 28 March 2026 00:57:12 +0000 (0:00:00.312) 0:00:04.484 ********
2026-03-28 00:59:08.718230 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.718243 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:59:08.718254 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:59:08.718285 | orchestrator |
2026-03-28 00:59:08.718296 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-28 00:59:08.718307 | orchestrator | Saturday 28 March 2026 00:57:12 +0000 (0:00:00.528) 0:00:05.013 ********
2026-03-28 00:59:08.718334 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.718346 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.718357 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.718382 | orchestrator |
2026-03-28 00:59:08.718404 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-28 00:59:08.718416 | orchestrator | Saturday 28 March 2026 00:57:13 +0000 (0:00:00.318) 0:00:05.331 ********
2026-03-28 00:59:08.718427 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 00:59:08.718438 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 00:59:08.718459 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 00:59:08.718470 | orchestrator |
2026-03-28 00:59:08.718481 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-28 00:59:08.718493 | orchestrator | Saturday 28 March 2026 00:57:13 +0000 (0:00:00.659) 0:00:05.990 ********
2026-03-28 00:59:08.718504 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.718514 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.718525 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.718536 | orchestrator |
2026-03-28 00:59:08.718550 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-28 00:59:08.718568 | orchestrator | Saturday 28 March 2026 00:57:14 +0000 (0:00:00.445) 0:00:06.436 ********
2026-03-28 00:59:08.718588 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 00:59:08.718606 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 00:59:08.718624 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 00:59:08.718641 | orchestrator |
2026-03-28 00:59:08.718658 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-28 00:59:08.718675 | orchestrator | Saturday 28 March 2026 00:57:17 +0000 (0:00:03.187) 0:00:09.623 ********
2026-03-28 00:59:08.718692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 00:59:08.718712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 00:59:08.718729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 00:59:08.718746 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.718762 | orchestrator |
2026-03-28 00:59:08.718808 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-28 00:59:08.718831 | orchestrator | Saturday 28 March 2026 00:57:17 +0000 (0:00:00.431) 0:00:10.055 ********
2026-03-28 00:59:08.718851 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.718873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.718892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.718910 | orchestrator | skipping:
[testbed-node-3] 2026-03-28 00:59:08.718922 | orchestrator | 2026-03-28 00:59:08.718933 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-28 00:59:08.718943 | orchestrator | Saturday 28 March 2026 00:57:18 +0000 (0:00:00.877) 0:00:10.933 ******** 2026-03-28 00:59:08.718956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.718970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.719038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.719051 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.719062 | orchestrator | 2026-03-28 00:59:08.719073 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-28 00:59:08.719093 | orchestrator | Saturday 28 March 2026 00:57:18 +0000 
(0:00:00.171) 0:00:11.105 ******** 2026-03-28 00:59:08.719107 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9deabbb35d8f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-28 00:57:15.255429', 'end': '2026-03-28 00:57:15.285657', 'delta': '0:00:00.030228', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9deabbb35d8f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-28 00:59:08.719123 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3f104503cce9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-28 00:57:16.314944', 'end': '2026-03-28 00:57:16.360943', 'delta': '0:00:00.045999', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3f104503cce9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-28 00:59:08.719146 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '698db66fc3b9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-28 00:57:17.204731', 'end': '2026-03-28 00:57:17.242587', 'delta': '0:00:00.037856', 'msg': '', 'invocation': {'module_args': {'_raw_params': 
'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['698db66fc3b9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-28 00:59:08.719159 | orchestrator | 2026-03-28 00:59:08.719170 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-28 00:59:08.719181 | orchestrator | Saturday 28 March 2026 00:57:19 +0000 (0:00:00.408) 0:00:11.513 ******** 2026-03-28 00:59:08.719192 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:59:08.719203 | orchestrator | ok: [testbed-node-4] 2026-03-28 00:59:08.719214 | orchestrator | ok: [testbed-node-5] 2026-03-28 00:59:08.719225 | orchestrator | 2026-03-28 00:59:08.719235 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-28 00:59:08.719246 | orchestrator | Saturday 28 March 2026 00:57:19 +0000 (0:00:00.467) 0:00:11.981 ******** 2026-03-28 00:59:08.719289 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-28 00:59:08.719308 | orchestrator | 2026-03-28 00:59:08.719319 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-28 00:59:08.719330 | orchestrator | Saturday 28 March 2026 00:57:21 +0000 (0:00:01.386) 0:00:13.367 ******** 2026-03-28 00:59:08.719351 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.719362 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.719373 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:59:08.719384 | orchestrator | 2026-03-28 00:59:08.719395 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-28 00:59:08.719405 | orchestrator | Saturday 
28 March 2026 00:57:21 +0000 (0:00:00.310) 0:00:13.677 ******** 2026-03-28 00:59:08.719416 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.719427 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.719438 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:59:08.719448 | orchestrator | 2026-03-28 00:59:08.719459 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 00:59:08.719470 | orchestrator | Saturday 28 March 2026 00:57:21 +0000 (0:00:00.476) 0:00:14.154 ******** 2026-03-28 00:59:08.719480 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.719491 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.719502 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:59:08.719512 | orchestrator | 2026-03-28 00:59:08.719523 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-28 00:59:08.719534 | orchestrator | Saturday 28 March 2026 00:57:22 +0000 (0:00:00.597) 0:00:14.751 ******** 2026-03-28 00:59:08.719544 | orchestrator | ok: [testbed-node-3] 2026-03-28 00:59:08.719555 | orchestrator | 2026-03-28 00:59:08.719565 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-28 00:59:08.719576 | orchestrator | Saturday 28 March 2026 00:57:22 +0000 (0:00:00.146) 0:00:14.898 ******** 2026-03-28 00:59:08.719587 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.719598 | orchestrator | 2026-03-28 00:59:08.719614 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-28 00:59:08.719626 | orchestrator | Saturday 28 March 2026 00:57:22 +0000 (0:00:00.265) 0:00:15.164 ******** 2026-03-28 00:59:08.719637 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.719647 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.719658 | orchestrator | skipping: [testbed-node-5] 2026-03-28 
00:59:08.719669 | orchestrator | 2026-03-28 00:59:08.719679 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-28 00:59:08.719690 | orchestrator | Saturday 28 March 2026 00:57:23 +0000 (0:00:00.332) 0:00:15.497 ******** 2026-03-28 00:59:08.719700 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.719711 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.719722 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:59:08.719732 | orchestrator | 2026-03-28 00:59:08.719744 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-28 00:59:08.719764 | orchestrator | Saturday 28 March 2026 00:57:23 +0000 (0:00:00.317) 0:00:15.814 ******** 2026-03-28 00:59:08.719781 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.719800 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.719818 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:59:08.719836 | orchestrator | 2026-03-28 00:59:08.719854 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-28 00:59:08.719871 | orchestrator | Saturday 28 March 2026 00:57:24 +0000 (0:00:00.622) 0:00:16.437 ******** 2026-03-28 00:59:08.719888 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.719905 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.719924 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:59:08.719944 | orchestrator | 2026-03-28 00:59:08.719964 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-28 00:59:08.719984 | orchestrator | Saturday 28 March 2026 00:57:24 +0000 (0:00:00.380) 0:00:16.817 ******** 2026-03-28 00:59:08.720002 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.720023 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.720041 | orchestrator | skipping: [testbed-node-5] 2026-03-28 
00:59:08.720060 | orchestrator | 2026-03-28 00:59:08.720076 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-28 00:59:08.720098 | orchestrator | Saturday 28 March 2026 00:57:24 +0000 (0:00:00.325) 0:00:17.143 ******** 2026-03-28 00:59:08.720109 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.720120 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.720131 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:59:08.720152 | orchestrator | 2026-03-28 00:59:08.720163 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-28 00:59:08.720174 | orchestrator | Saturday 28 March 2026 00:57:25 +0000 (0:00:00.336) 0:00:17.479 ******** 2026-03-28 00:59:08.720185 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.720195 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.720206 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:59:08.720217 | orchestrator | 2026-03-28 00:59:08.720227 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-28 00:59:08.720238 | orchestrator | Saturday 28 March 2026 00:57:25 +0000 (0:00:00.550) 0:00:18.030 ******** 2026-03-28 00:59:08.720251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3eb28a65--49e9--527a--93b6--39f945444b2a-osd--block--3eb28a65--49e9--527a--93b6--39f945444b2a', 'dm-uuid-LVM-Tchacmkbltv1g8Xx5nMCBdnIbnCImJsIPRMECP12a16eAHm6yNrvtAvfv1MxnEUQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720298 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c246942--827f--54a7--8a08--735105fd2fd0-osd--block--8c246942--827f--54a7--8a08--735105fd2fd0', 'dm-uuid-LVM-HWw01x01ciwNdkzf1FFw2E1N5qxqftc4GlclHBtP5diew3a2C5nmBsr7tBLGXVnd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720354 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95774a3e--10f2--5c5c--866d--eaa2f6123896-osd--block--95774a3e--10f2--5c5c--866d--eaa2f6123896', 'dm-uuid-LVM-on9bNmqQdl6bqf2swm2eFjEqLh4NH46Ev4my3a3dstUeUyyjSITM8iDZj3AEZbI7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720417 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6126976c--050b--5515--8c81--fb3ee245975b-osd--block--6126976c--050b--5515--8c81--fb3ee245975b', 'dm-uuid-LVM-XwRfxGsnuoG51EkZS9WI1B6veK02hkwXdHKcvQh9ZJAmvIlWj4yHrj2qiTTQd77U'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 
00:59:08.720462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3eb28a65--49e9--527a--93b6--39f945444b2a-osd--block--3eb28a65--49e9--527a--93b6--39f945444b2a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jn6V62-taHj-7NNl-DW6r-rQuJ-XtFr-BtDt29', 'scsi-0QEMU_QEMU_HARDDISK_78ac07d6-a998-431a-8632-f54c89645a8d', 'scsi-SQEMU_QEMU_HARDDISK_78ac07d6-a998-431a-8632-f54c89645a8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8c246942--827f--54a7--8a08--735105fd2fd0-osd--block--8c246942--827f--54a7--8a08--735105fd2fd0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GIfRsv-INbZ-xxrK-fLUV-EInY-JKfg-cHLsY4', 'scsi-0QEMU_QEMU_HARDDISK_af575ecf-0cf6-48aa-a1b6-43f16240ccad', 'scsi-SQEMU_QEMU_HARDDISK_af575ecf-0cf6-48aa-a1b6-43f16240ccad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2c41d1e-c1aa-422a-bc56-ab0bbd118726', 'scsi-SQEMU_QEMU_HARDDISK_d2c41d1e-c1aa-422a-bc56-ab0bbd118726'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720675 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.720686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--95774a3e--10f2--5c5c--866d--eaa2f6123896-osd--block--95774a3e--10f2--5c5c--866d--eaa2f6123896'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XX3svd-zCjt-Tult-1O5W-sL6T-2xD5-SPEEy7', 'scsi-0QEMU_QEMU_HARDDISK_0a0aea56-4050-4691-823a-d862fa48a59f', 'scsi-SQEMU_QEMU_HARDDISK_0a0aea56-4050-4691-823a-d862fa48a59f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a9825c53--ea63--5cae--a5f7--e494f125bb8e-osd--block--a9825c53--ea63--5cae--a5f7--e494f125bb8e', 'dm-uuid-LVM-B4pMeiTrBM8rvX1vahFbOPL3qjpt1Q32fUdZkecXTUtglIbr9PLn8TGSmGxI4RpJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6126976c--050b--5515--8c81--fb3ee245975b-osd--block--6126976c--050b--5515--8c81--fb3ee245975b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oy3YlD-QptT-9TfB-PYTV-Y3aA-qi23-moCusu', 'scsi-0QEMU_QEMU_HARDDISK_c165f4e4-c145-4cd5-8a4b-fe75c460abfb', 'scsi-SQEMU_QEMU_HARDDISK_c165f4e4-c145-4cd5-8a4b-fe75c460abfb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d-osd--block--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d', 'dm-uuid-LVM-fa5e9cMh8YJv5YMVwd7Z0lDYFGaAUWE21iI9X68E0kjP8CuUyiEfHNG6pf8mWjS1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edfefcfb-f0d2-43d0-b5b0-353b223cd811', 'scsi-SQEMU_QEMU_HARDDISK_edfefcfb-f0d2-43d0-b5b0-353b223cd811'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2026-03-28 00:59:08.720859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720870 | orchestrator | skipping: [testbed-node-4] 2026-03-28 00:59:08.720882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-28 00:59:08.720947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part1', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part14', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part15', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part16', 
'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a9825c53--ea63--5cae--a5f7--e494f125bb8e-osd--block--a9825c53--ea63--5cae--a5f7--e494f125bb8e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fQOx62-yoeg-BbRB-W0wg-1u6h-7as6-VoKrFG', 'scsi-0QEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869', 'scsi-SQEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d-osd--block--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-92lr24-Adml-wnIe-TNqU-A4d1-LbSX-xdGC5x', 'scsi-0QEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815', 'scsi-SQEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.720994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d', 'scsi-SQEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.721013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-28 00:59:08.721025 | orchestrator | skipping: [testbed-node-5] 2026-03-28 00:59:08.721036 | orchestrator | 2026-03-28 00:59:08.721046 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-28 00:59:08.721057 | orchestrator | Saturday 28 March 2026 00:57:26 +0000 (0:00:00.634) 0:00:18.665 ******** 2026-03-28 00:59:08.721070 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3eb28a65--49e9--527a--93b6--39f945444b2a-osd--block--3eb28a65--49e9--527a--93b6--39f945444b2a', 'dm-uuid-LVM-Tchacmkbltv1g8Xx5nMCBdnIbnCImJsIPRMECP12a16eAHm6yNrvtAvfv1MxnEUQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c246942--827f--54a7--8a08--735105fd2fd0-osd--block--8c246942--827f--54a7--8a08--735105fd2fd0', 'dm-uuid-LVM-HWw01x01ciwNdkzf1FFw2E1N5qxqftc4GlclHBtP5diew3a2C5nmBsr7tBLGXVnd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721116 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721147 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721159 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721170 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721205 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c51dbd4-3dd9-4220-b480-983204e78537-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-28 00:59:08.721238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3eb28a65--49e9--527a--93b6--39f945444b2a-osd--block--3eb28a65--49e9--527a--93b6--39f945444b2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jn6V62-taHj-7NNl-DW6r-rQuJ-XtFr-BtDt29', 'scsi-0QEMU_QEMU_HARDDISK_78ac07d6-a998-431a-8632-f54c89645a8d', 'scsi-SQEMU_QEMU_HARDDISK_78ac07d6-a998-431a-8632-f54c89645a8d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721284 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95774a3e--10f2--5c5c--866d--eaa2f6123896-osd--block--95774a3e--10f2--5c5c--866d--eaa2f6123896', 'dm-uuid-LVM-on9bNmqQdl6bqf2swm2eFjEqLh4NH46Ev4my3a3dstUeUyyjSITM8iDZj3AEZbI7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721298 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6126976c--050b--5515--8c81--fb3ee245975b-osd--block--6126976c--050b--5515--8c81--fb3ee245975b', 'dm-uuid-LVM-XwRfxGsnuoG51EkZS9WI1B6veK02hkwXdHKcvQh9ZJAmvIlWj4yHrj2qiTTQd77U'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8c246942--827f--54a7--8a08--735105fd2fd0-osd--block--8c246942--827f--54a7--8a08--735105fd2fd0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GIfRsv-INbZ-xxrK-fLUV-EInY-JKfg-cHLsY4', 'scsi-0QEMU_QEMU_HARDDISK_af575ecf-0cf6-48aa-a1b6-43f16240ccad', 'scsi-SQEMU_QEMU_HARDDISK_af575ecf-0cf6-48aa-a1b6-43f16240ccad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721328 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2c41d1e-c1aa-422a-bc56-ab0bbd118726', 'scsi-SQEMU_QEMU_HARDDISK_d2c41d1e-c1aa-422a-bc56-ab0bbd118726'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721358 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721377 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721389 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721400 | orchestrator | skipping: [testbed-node-3] 2026-03-28 00:59:08.721418 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721430 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-28 00:59:08.721441 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721474 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721494 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb8cdf5a-61ca-4829-8f5a-ada391b02d40-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721507 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a9825c53--ea63--5cae--a5f7--e494f125bb8e-osd--block--a9825c53--ea63--5cae--a5f7--e494f125bb8e', 'dm-uuid-LVM-B4pMeiTrBM8rvX1vahFbOPL3qjpt1Q32fUdZkecXTUtglIbr9PLn8TGSmGxI4RpJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721530 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--95774a3e--10f2--5c5c--866d--eaa2f6123896-osd--block--95774a3e--10f2--5c5c--866d--eaa2f6123896'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XX3svd-zCjt-Tult-1O5W-sL6T-2xD5-SPEEy7', 'scsi-0QEMU_QEMU_HARDDISK_0a0aea56-4050-4691-823a-d862fa48a59f', 'scsi-SQEMU_QEMU_HARDDISK_0a0aea56-4050-4691-823a-d862fa48a59f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721542 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d-osd--block--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d', 'dm-uuid-LVM-fa5e9cMh8YJv5YMVwd7Z0lDYFGaAUWE21iI9X68E0kjP8CuUyiEfHNG6pf8mWjS1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721559 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6126976c--050b--5515--8c81--fb3ee245975b-osd--block--6126976c--050b--5515--8c81--fb3ee245975b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oy3YlD-QptT-9TfB-PYTV-Y3aA-qi23-moCusu', 'scsi-0QEMU_QEMU_HARDDISK_c165f4e4-c145-4cd5-8a4b-fe75c460abfb', 'scsi-SQEMU_QEMU_HARDDISK_c165f4e4-c145-4cd5-8a4b-fe75c460abfb'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721589 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_edfefcfb-f0d2-43d0-b5b0-353b223cd811', 'scsi-SQEMU_QEMU_HARDDISK_edfefcfb-f0d2-43d0-b5b0-353b223cd811'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721600 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721616 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721627 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:59:08.721639 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721667 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721686 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721698 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721715 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721860 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part1', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part14', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part15', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part16', 'scsi-SQEMU_QEMU_HARDDISK_9304b03c-54d0-4df2-b114-2d3d3345c945-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721891 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a9825c53--ea63--5cae--a5f7--e494f125bb8e-osd--block--a9825c53--ea63--5cae--a5f7--e494f125bb8e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fQOx62-yoeg-BbRB-W0wg-1u6h-7as6-VoKrFG', 'scsi-0QEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869', 'scsi-SQEMU_QEMU_HARDDISK_616f32f6-becb-4ce1-b615-c2a0fbaca869'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721909 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d-osd--block--8fa92e37--9e8f--5bc1--86de--5e52e5346f3d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-92lr24-Adml-wnIe-TNqU-A4d1-LbSX-xdGC5x', 'scsi-0QEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815', 'scsi-SQEMU_QEMU_HARDDISK_479351df-b417-42ac-b9cb-d6683c731815'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d', 'scsi-SQEMU_QEMU_HARDDISK_3670b387-e30b-4544-bca5-74e83387707d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-28-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-28 00:59:08.721953 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:59:08.721964 | orchestrator |
2026-03-28 00:59:08.721975 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-28 00:59:08.721996 | orchestrator | Saturday 28 March 2026 00:57:27 +0000 (0:00:00.744) 0:00:19.410 ********
2026-03-28 00:59:08.722008 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.722076 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.722088 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.722098 | orchestrator |
2026-03-28 00:59:08.722109 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-28 00:59:08.722120 | orchestrator | Saturday 28 March 2026 00:57:27 +0000 (0:00:00.706) 0:00:20.116 ********
2026-03-28 00:59:08.722131 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.722142 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.722152 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.722163 | orchestrator |
2026-03-28 00:59:08.722174 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 00:59:08.722184 | orchestrator | Saturday 28 March 2026 00:57:28 +0000 (0:00:00.550) 0:00:20.666 ********
2026-03-28 00:59:08.722195 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.722206 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.722216 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.722227 | orchestrator |
2026-03-28 00:59:08.722238 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 00:59:08.722248 | orchestrator | Saturday 28 March 2026 00:57:29 +0000 (0:00:00.687) 0:00:21.354 ********
2026-03-28 00:59:08.722279 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.722291 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:59:08.722302 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:59:08.722313 | orchestrator |
2026-03-28 00:59:08.722324 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-28 00:59:08.722335 | orchestrator | Saturday 28 March 2026 00:57:29 +0000 (0:00:00.300) 0:00:21.654 ********
2026-03-28 00:59:08.722346 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.722357 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:59:08.722368 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:59:08.722379 | orchestrator |
2026-03-28 00:59:08.722390 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-28 00:59:08.722401 | orchestrator | Saturday 28 March 2026 00:57:29 +0000 (0:00:00.545) 0:00:22.076 ********
2026-03-28 00:59:08.722412 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.722423 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:59:08.722434 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:59:08.722444 | orchestrator |
2026-03-28 00:59:08.722456 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-28 00:59:08.722467 | orchestrator | Saturday 28 March 2026 00:57:30 +0000 (0:00:00.545) 0:00:22.621 ********
2026-03-28 00:59:08.722478 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 00:59:08.722492 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 00:59:08.722505 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 00:59:08.722518 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 00:59:08.722531 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 00:59:08.722550 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 00:59:08.722563 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 00:59:08.722576 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 00:59:08.722588 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 00:59:08.722603 | orchestrator |
2026-03-28 00:59:08.722616 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-28 00:59:08.722627 | orchestrator | Saturday 28 March 2026 00:57:31 +0000 (0:00:00.873) 0:00:23.494 ********
2026-03-28 00:59:08.722638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-28 00:59:08.722649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-28 00:59:08.722659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-28 00:59:08.722670 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.722689 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-28 00:59:08.722700 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-28 00:59:08.722711 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-28 00:59:08.722722 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:59:08.722733 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-28 00:59:08.722744 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-28 00:59:08.722754 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-28 00:59:08.722765 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:59:08.722776 | orchestrator |
2026-03-28 00:59:08.722787 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-28 00:59:08.722798 | orchestrator | Saturday 28 March 2026 00:57:31 +0000 (0:00:00.373) 0:00:23.868 ********
2026-03-28 00:59:08.722810 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 00:59:08.722821 | orchestrator |
2026-03-28 00:59:08.722833 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-28 00:59:08.722845 | orchestrator | Saturday 28 March 2026 00:57:32 +0000 (0:00:00.754) 0:00:24.623 ********
2026-03-28 00:59:08.722863 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.722874 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:59:08.722885 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:59:08.722896 | orchestrator |
2026-03-28 00:59:08.722907 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-28 00:59:08.722918 | orchestrator | Saturday 28 March 2026 00:57:32 +0000 (0:00:00.342) 0:00:24.965 ********
2026-03-28 00:59:08.722928 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.722939 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:59:08.722950 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:59:08.722961 | orchestrator |
2026-03-28 00:59:08.722972 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-28 00:59:08.722983 | orchestrator | Saturday 28 March 2026 00:57:33 +0000 (0:00:00.334) 0:00:25.300 ********
2026-03-28 00:59:08.722993 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.723004 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:59:08.723014 | orchestrator | skipping: [testbed-node-5]
2026-03-28 00:59:08.723025 | orchestrator |
2026-03-28 00:59:08.723036 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-28 00:59:08.723046 | orchestrator | Saturday 28 March 2026 00:57:33 +0000 (0:00:00.331) 0:00:25.631 ********
2026-03-28 00:59:08.723057 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.723068 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.723078 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.723088 | orchestrator |
2026-03-28 00:59:08.723099 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-28 00:59:08.723110 | orchestrator | Saturday 28 March 2026 00:57:34 +0000 (0:00:00.665) 0:00:26.296 ********
2026-03-28 00:59:08.723120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:59:08.723131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:59:08.723141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:59:08.723152 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.723163 | orchestrator |
2026-03-28 00:59:08.723173 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-28 00:59:08.723184 | orchestrator | Saturday 28 March 2026 00:57:34 +0000 (0:00:00.428) 0:00:26.725 ********
2026-03-28 00:59:08.723195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:59:08.723205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:59:08.723216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:59:08.723235 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.723246 | orchestrator |
2026-03-28 00:59:08.723280 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-28 00:59:08.723293 | orchestrator | Saturday 28 March 2026 00:57:34 +0000 (0:00:00.484) 0:00:27.210 ********
2026-03-28 00:59:08.723303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:59:08.723314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-28 00:59:08.723325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-28 00:59:08.723335 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.723346 | orchestrator |
2026-03-28 00:59:08.723356 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-28 00:59:08.723367 | orchestrator | Saturday 28 March 2026 00:57:35 +0000 (0:00:00.444) 0:00:27.655 ********
2026-03-28 00:59:08.723378 | orchestrator | ok: [testbed-node-3]
2026-03-28 00:59:08.723389 | orchestrator | ok: [testbed-node-4]
2026-03-28 00:59:08.723399 | orchestrator | ok: [testbed-node-5]
2026-03-28 00:59:08.723410 | orchestrator |
2026-03-28 00:59:08.723420 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-28 00:59:08.723440 | orchestrator | Saturday 28 March 2026 00:57:35 +0000 (0:00:00.365) 0:00:28.021 ********
2026-03-28 00:59:08.723451 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-28 00:59:08.723462 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-28 00:59:08.723473 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-28 00:59:08.723484 | orchestrator |
2026-03-28 00:59:08.723494 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-28 00:59:08.723505 | orchestrator | Saturday 28 March 2026 00:57:36 +0000 (0:00:00.527) 0:00:28.548 ********
2026-03-28 00:59:08.723516 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 00:59:08.723527 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 00:59:08.723538 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 00:59:08.723548 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:59:08.723559 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 00:59:08.723570 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-28 00:59:08.723580 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 00:59:08.723591 | orchestrator |
2026-03-28 00:59:08.723602 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-28 00:59:08.723613 | orchestrator | Saturday 28 March 2026 00:57:37 +0000 (0:00:01.135) 0:00:29.684 ********
2026-03-28 00:59:08.723623 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-28 00:59:08.723635 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-28 00:59:08.723646 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-28 00:59:08.723656 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-28 00:59:08.723667 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-28 00:59:08.723678 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-28 00:59:08.723695 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-28 00:59:08.723706 | orchestrator |
2026-03-28 00:59:08.723717 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-03-28 00:59:08.723727 | orchestrator | Saturday 28 March 2026 00:57:39 +0000 (0:00:02.147) 0:00:31.831 ********
2026-03-28 00:59:08.723738 | orchestrator | skipping: [testbed-node-3]
2026-03-28 00:59:08.723748 | orchestrator | skipping: [testbed-node-4]
2026-03-28 00:59:08.723759 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-03-28 00:59:08.723778 | orchestrator |
2026-03-28 00:59:08.723789 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-03-28 00:59:08.723799 | orchestrator | Saturday 28 March 2026 00:57:40 +0000 (0:00:00.446) 0:00:32.278 ********
2026-03-28 00:59:08.723812 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-28 00:59:08.723823 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-28 00:59:08.723834 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-28 00:59:08.723845 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-28 00:59:08.723856 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-28 00:59:08.723867 | orchestrator |
2026-03-28 00:59:08.723878 | orchestrator | TASK [generate keys] ***********************************************************
2026-03-28 00:59:08.723889 | orchestrator | Saturday 28 March 2026 00:58:21 +0000 (0:00:40.981) 0:01:13.259 ********
2026-03-28 00:59:08.723899 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.723910 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.723921 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.723936 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.723947 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.723958 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.723969 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-03-28 00:59:08.723980 | orchestrator |
2026-03-28 00:59:08.723991 | orchestrator | TASK [get keys from monitors] **************************************************
2026-03-28 00:59:08.724001 | orchestrator | Saturday 28 March 2026 00:58:39 +0000 (0:00:18.000) 0:01:31.260 ********
2026-03-28 00:59:08.724012 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.724027 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.724045 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.724063 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.724081 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.724097 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-28 00:59:08.724115 | orchestrator |
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-28 00:59:08.724135 | orchestrator | 2026-03-28 00:59:08.724154 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-28 00:59:08.724186 | orchestrator | Saturday 28 March 2026 00:58:48 +0000 (0:00:09.231) 0:01:40.492 ******** 2026-03-28 00:59:08.724204 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:59:08.724223 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 00:59:08.724241 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 00:59:08.724328 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:59:08.724350 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 00:59:08.724380 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 00:59:08.724397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:59:08.724414 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 00:59:08.724431 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 00:59:08.724447 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:59:08.724464 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 00:59:08.724482 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 00:59:08.724500 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:59:08.724517 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-28 00:59:08.724532 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 00:59:08.724550 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-28 00:59:08.724569 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-28 00:59:08.724585 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-28 00:59:08.724601 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-28 00:59:08.724618 | orchestrator | 2026-03-28 00:59:08.724636 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:59:08.724654 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-28 00:59:08.724675 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-28 00:59:08.724695 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-28 00:59:08.724713 | orchestrator | 2026-03-28 00:59:08.724730 | orchestrator | 2026-03-28 00:59:08.724748 | orchestrator | 2026-03-28 00:59:08.724766 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:59:08.724785 | orchestrator | Saturday 28 March 2026 00:59:06 +0000 (0:00:17.784) 0:01:58.276 ******** 2026-03-28 00:59:08.724804 | orchestrator | =============================================================================== 2026-03-28 00:59:08.724820 | orchestrator | create openstack pool(s) ----------------------------------------------- 40.98s 2026-03-28 00:59:08.724837 | orchestrator | generate keys ---------------------------------------------------------- 18.00s 2026-03-28 00:59:08.724854 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.78s 
2026-03-28 00:59:08.724870 | orchestrator | get keys from monitors -------------------------------------------------- 9.23s 2026-03-28 00:59:08.724886 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.19s 2026-03-28 00:59:08.724898 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.15s 2026-03-28 00:59:08.724917 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.39s 2026-03-28 00:59:08.724939 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.14s 2026-03-28 00:59:08.724949 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 1.07s 2026-03-28 00:59:08.724958 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.88s 2026-03-28 00:59:08.724967 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2026-03-28 00:59:08.724977 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s 2026-03-28 00:59:08.724986 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.75s 2026-03-28 00:59:08.724996 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.74s 2026-03-28 00:59:08.725005 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s 2026-03-28 00:59:08.725015 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s 2026-03-28 00:59:08.725024 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.67s 2026-03-28 00:59:08.725033 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.66s 2026-03-28 00:59:08.725043 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.66s 2026-03-28 
00:59:08.725058 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.63s 2026-03-28 00:59:08.725073 | orchestrator | 2026-03-28 00:59:08 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED 2026-03-28 00:59:08.725090 | orchestrator | 2026-03-28 00:59:08 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED 2026-03-28 00:59:08.725106 | orchestrator | 2026-03-28 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:59:11.781734 | orchestrator | 2026-03-28 00:59:11 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED 2026-03-28 00:59:11.783440 | orchestrator | 2026-03-28 00:59:11 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED 2026-03-28 00:59:11.785842 | orchestrator | 2026-03-28 00:59:11 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED 2026-03-28 00:59:11.786548 | orchestrator | 2026-03-28 00:59:11 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:59:14.833489 | orchestrator | 2026-03-28 00:59:14 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED 2026-03-28 00:59:14.836375 | orchestrator | 2026-03-28 00:59:14 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED 2026-03-28 00:59:14.840695 | orchestrator | 2026-03-28 00:59:14 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED 2026-03-28 00:59:14.840772 | orchestrator | 2026-03-28 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:59:17.890725 | orchestrator | 2026-03-28 00:59:17 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED 2026-03-28 00:59:17.890967 | orchestrator | 2026-03-28 00:59:17 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED 2026-03-28 00:59:17.893067 | orchestrator | 2026-03-28 00:59:17 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED 2026-03-28 00:59:17.893145 | orchestrator | 2026-03-28 
00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:59:20.938209 | orchestrator | 2026-03-28 00:59:20 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED 2026-03-28 00:59:20.942106 | orchestrator | 2026-03-28 00:59:20 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED 2026-03-28 00:59:20.944400 | orchestrator | 2026-03-28 00:59:20 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED 2026-03-28 00:59:20.944474 | orchestrator | 2026-03-28 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:59:23.998462 | orchestrator | 2026-03-28 00:59:23 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED 2026-03-28 00:59:24.000015 | orchestrator | 2026-03-28 00:59:24 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED 2026-03-28 00:59:24.003451 | orchestrator | 2026-03-28 00:59:24 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED 2026-03-28 00:59:24.003634 | orchestrator | 2026-03-28 00:59:24 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:59:27.058443 | orchestrator | 2026-03-28 00:59:27 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state STARTED 2026-03-28 00:59:27.065086 | orchestrator | 2026-03-28 00:59:27 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED 2026-03-28 00:59:27.065180 | orchestrator | 2026-03-28 00:59:27 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED 2026-03-28 00:59:27.065204 | orchestrator | 2026-03-28 00:59:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 00:59:30.102576 | orchestrator | 2026-03-28 00:59:30 | INFO  | Task e69e920c-198f-405e-b326-b9ac960ea778 is in state SUCCESS 2026-03-28 00:59:30.103819 | orchestrator | 2026-03-28 00:59:30.103847 | orchestrator | 2026-03-28 00:59:30.103856 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 00:59:30.103863 | 
orchestrator | 2026-03-28 00:59:30.103870 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 00:59:30.103878 | orchestrator | Saturday 28 March 2026 00:56:15 +0000 (0:00:00.339) 0:00:00.339 ******** 2026-03-28 00:59:30.103885 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:59:30.103892 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:59:30.103899 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:59:30.103905 | orchestrator | 2026-03-28 00:59:30.103912 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 00:59:30.103919 | orchestrator | Saturday 28 March 2026 00:56:16 +0000 (0:00:00.356) 0:00:00.696 ******** 2026-03-28 00:59:30.103927 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-28 00:59:30.103934 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-28 00:59:30.103940 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-28 00:59:30.103947 | orchestrator | 2026-03-28 00:59:30.103954 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-28 00:59:30.103960 | orchestrator | 2026-03-28 00:59:30.103967 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 00:59:30.103973 | orchestrator | Saturday 28 March 2026 00:56:16 +0000 (0:00:00.317) 0:00:01.013 ******** 2026-03-28 00:59:30.103980 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:59:30.103986 | orchestrator | 2026-03-28 00:59:30.103993 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-28 00:59:30.103999 | orchestrator | Saturday 28 March 2026 00:56:17 +0000 (0:00:00.660) 0:00:01.674 ******** 2026-03-28 00:59:30.104006 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 00:59:30.104013 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 00:59:30.104019 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-28 00:59:30.104026 | orchestrator | 2026-03-28 00:59:30.104032 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-28 00:59:30.104039 | orchestrator | Saturday 28 March 2026 00:56:18 +0000 (0:00:01.124) 0:00:02.798 ******** 2026-03-28 00:59:30.104048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104153 | orchestrator | 2026-03-28 00:59:30.104160 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-28 00:59:30.104166 | orchestrator | Saturday 28 March 2026 00:56:19 +0000 (0:00:01.342) 0:00:04.141 ******** 2026-03-28 00:59:30.104176 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:59:30.104183 | orchestrator | 2026-03-28 00:59:30.104190 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-28 00:59:30.104201 | orchestrator | Saturday 28 March 2026 00:56:20 +0000 (0:00:00.619) 0:00:04.760 ******** 2026-03-28 00:59:30.104400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104493 | orchestrator | 2026-03-28 00:59:30.104500 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-28 00:59:30.104507 | orchestrator | Saturday 28 March 2026 00:56:23 +0000 (0:00:02.980) 0:00:07.741 ******** 2026-03-28 00:59:30.104514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:59:30.104531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:59:30.104539 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:59:30.104547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:59:30.104559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:59:30.104566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:59:30.104573 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:30.104589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:59:30.104597 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:30.104604 | orchestrator | 2026-03-28 00:59:30.104610 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-28 00:59:30.104617 | orchestrator | Saturday 28 March 2026 00:56:24 +0000 (0:00:01.147) 0:00:08.889 ******** 2026-03-28 00:59:30.104624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:59:30.104636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:59:30.104643 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:59:30.104651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:59:30.104667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:59:30.104675 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:30.104682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:59:30.104694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:59:30.104701 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:30.104708 | orchestrator | 2026-03-28 00:59:30.104715 | orchestrator | TASK [opensearch : Copying over config.json files for services] 
**************** 2026-03-28 00:59:30.104722 | orchestrator | Saturday 28 March 2026 00:56:25 +0000 (0:00:01.168) 0:00:10.057 ******** 2026-03-28 00:59:30.104729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104762 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104806 | orchestrator | 2026-03-28 00:59:30.104813 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-28 00:59:30.104820 | orchestrator | Saturday 28 March 2026 00:56:28 +0000 (0:00:02.762) 0:00:12.820 ******** 2026-03-28 00:59:30.104826 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:59:30.104833 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:59:30.104840 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:59:30.104846 | orchestrator | 2026-03-28 00:59:30.104853 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-28 00:59:30.104859 | orchestrator | Saturday 28 March 2026 00:56:31 +0000 (0:00:02.911) 0:00:15.731 ******** 2026-03-28 00:59:30.104866 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:59:30.104872 | orchestrator | changed: [testbed-node-1] 2026-03-28 00:59:30.104879 | orchestrator | changed: [testbed-node-2] 2026-03-28 00:59:30.104886 | orchestrator | 2026-03-28 00:59:30.104892 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-03-28 00:59:30.104899 | orchestrator | Saturday 28 March 2026 00:56:33 +0000 (0:00:01.670) 0:00:17.402 ******** 2026-03-28 00:59:30.104905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 00:59:30.104940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-28 00:59:30.104963 | orchestrator | 2026-03-28 00:59:30.104969 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-03-28 00:59:30.104976 | orchestrator | Saturday 28 March 2026 00:56:35 +0000 (0:00:02.345) 0:00:19.748 ******** 2026-03-28 00:59:30.104983 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 00:59:30.104994 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:59:30.105001 | orchestrator | } 2026-03-28 00:59:30.105009 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:59:30.105016 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:59:30.105024 | orchestrator | } 2026-03-28 00:59:30.105031 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:59:30.105042 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:59:30.105050 | orchestrator | } 2026-03-28 00:59:30.105058 | orchestrator | 2026-03-28 00:59:30.105065 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:59:30.105075 | orchestrator | Saturday 28 March 2026 00:56:36 +0000 (0:00:00.633) 0:00:20.381 ******** 2026-03-28 00:59:30.105084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:59:30.105092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:59:30.105101 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:59:30.105109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:59:30.105124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:59:30.105137 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:30.105144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 00:59:30.105151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-28 00:59:30.105158 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:30.105165 | orchestrator | 2026-03-28 
00:59:30.105172 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-28 00:59:30.105178 | orchestrator | Saturday 28 March 2026 00:56:36 +0000 (0:00:00.865) 0:00:21.247 ********
2026-03-28 00:59:30.105185 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:30.105191 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:30.105198 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:30.105204 | orchestrator |
2026-03-28 00:59:30.105211 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-28 00:59:30.105217 | orchestrator | Saturday 28 March 2026 00:56:37 +0000 (0:00:00.331) 0:00:21.578 ********
2026-03-28 00:59:30.105288 | orchestrator |
2026-03-28 00:59:30.105298 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-28 00:59:30.105305 | orchestrator | Saturday 28 March 2026 00:56:37 +0000 (0:00:00.068) 0:00:21.647 ********
2026-03-28 00:59:30.105317 | orchestrator |
2026-03-28 00:59:30.105324 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-28 00:59:30.105330 | orchestrator | Saturday 28 March 2026 00:56:37 +0000 (0:00:00.068) 0:00:21.716 ********
2026-03-28 00:59:30.105337 | orchestrator |
2026-03-28 00:59:30.105343 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-28 00:59:30.105350 | orchestrator | Saturday 28 March 2026 00:56:37 +0000 (0:00:00.281) 0:00:21.998 ********
2026-03-28 00:59:30.105356 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:30.105363 | orchestrator |
2026-03-28 00:59:30.105370 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-28 00:59:30.105376 | orchestrator | Saturday 28 March 2026 00:56:37 +0000 (0:00:00.227) 0:00:22.225 ********
2026-03-28 00:59:30.105383 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:30.105390 | orchestrator |
2026-03-28 00:59:30.105396 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-28 00:59:30.105403 | orchestrator | Saturday 28 March 2026 00:56:38 +0000 (0:00:00.231) 0:00:22.456 ********
2026-03-28 00:59:30.105409 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:30.105416 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:59:30.105423 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:59:30.105430 | orchestrator |
2026-03-28 00:59:30.105436 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-28 00:59:30.105443 | orchestrator | Saturday 28 March 2026 00:57:51 +0000 (0:01:13.607) 0:01:36.064 ********
2026-03-28 00:59:30.105449 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:30.105456 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:59:30.105462 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:59:30.105469 | orchestrator |
2026-03-28 00:59:30.105479 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-28 00:59:30.105486 | orchestrator | Saturday 28 March 2026 00:59:14 +0000 (0:01:22.482) 0:02:58.546 ********
2026-03-28 00:59:30.105497 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:59:30.105504 | orchestrator |
2026-03-28 00:59:30.105511 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-28 00:59:30.105517 | orchestrator | Saturday 28 March 2026 00:59:15 +0000 (0:00:00.796) 0:02:59.342 ********
2026-03-28 00:59:30.105524 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:30.105531 | orchestrator |
2026-03-28 00:59:30.105609 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-03-28 00:59:30.105616 | orchestrator | Saturday 28 March 2026 00:59:17 +0000 (0:00:02.908) 0:03:02.250 ********
2026-03-28 00:59:30.105623 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:30.105630 | orchestrator |
2026-03-28 00:59:30.105636 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-03-28 00:59:30.105643 | orchestrator | Saturday 28 March 2026 00:59:20 +0000 (0:00:02.411) 0:03:04.662 ********
2026-03-28 00:59:30.105649 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:30.105656 | orchestrator |
2026-03-28 00:59:30.105662 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-28 00:59:30.105669 | orchestrator | Saturday 28 March 2026 00:59:22 +0000 (0:00:02.307) 0:03:06.969 ********
2026-03-28 00:59:30.105675 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:30.105682 | orchestrator |
2026-03-28 00:59:30.105689 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-03-28 00:59:30.105695 | orchestrator | Saturday 28 March 2026 00:59:25 +0000 (0:00:03.056) 0:03:10.025 ********
2026-03-28 00:59:30.105702 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:30.105708 | orchestrator |
2026-03-28 00:59:30.105715 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:59:30.105723 | orchestrator | testbed-node-0 : ok=20  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 00:59:30.105736 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-28 00:59:30.105743 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-28 00:59:30.105749 | orchestrator |
2026-03-28 00:59:30.105756 | orchestrator |
2026-03-28 00:59:30.105763 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:59:30.105769 | orchestrator | Saturday 28 March 2026 00:59:28 +0000 (0:00:02.855) 0:03:12.881 ********
2026-03-28 00:59:30.105776 | orchestrator | ===============================================================================
2026-03-28 00:59:30.105782 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.48s
2026-03-28 00:59:30.105789 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.61s
2026-03-28 00:59:30.105795 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.06s
2026-03-28 00:59:30.105802 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.98s
2026-03-28 00:59:30.105808 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.91s
2026-03-28 00:59:30.105815 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.91s
2026-03-28 00:59:30.105821 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.86s
2026-03-28 00:59:30.105828 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.76s
2026-03-28 00:59:30.105834 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.41s
2026-03-28 00:59:30.105841 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.35s
2026-03-28 00:59:30.105847 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.31s
2026-03-28 00:59:30.105854 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.67s
2026-03-28 00:59:30.105860 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.34s
2026-03-28 00:59:30.105867 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.17s
2026-03-28 00:59:30.105873 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.15s
2026-03-28 00:59:30.105880 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.12s
2026-03-28 00:59:30.105886 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.87s
2026-03-28 00:59:30.105893 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.80s
2026-03-28 00:59:30.105899 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.66s
2026-03-28 00:59:30.105906 | orchestrator | service-check-containers : opensearch | Notify handlers to restart containers --- 0.63s
2026-03-28 00:59:30.105916 | orchestrator | 2026-03-28 00:59:30 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:59:30.108758 | orchestrator | 2026-03-28 00:59:30 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED
2026-03-28 00:59:30.109033 | orchestrator | 2026-03-28 00:59:30 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:59:33.160994 | orchestrator | 2026-03-28 00:59:33 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:59:33.161650 | orchestrator | 2026-03-28 00:59:33 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED
2026-03-28 00:59:33.161726 | orchestrator | 2026-03-28 00:59:33 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:59:36.211471 | orchestrator | 2026-03-28 00:59:36 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:59:36.213871 | orchestrator | 2026-03-28 00:59:36 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED
2026-03-28 00:59:36.214076 | orchestrator | 2026-03-28 00:59:36 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:59:39.254486 | orchestrator | 2026-03-28 00:59:39 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:59:39.255148 | orchestrator | 2026-03-28 00:59:39 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED
2026-03-28 00:59:39.255196 | orchestrator | 2026-03-28 00:59:39 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:59:42.301001 | orchestrator | 2026-03-28 00:59:42 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:59:42.303977 | orchestrator | 2026-03-28 00:59:42 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED
2026-03-28 00:59:42.304054 | orchestrator | 2026-03-28 00:59:42 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:59:45.337581 | orchestrator | 2026-03-28 00:59:45 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:59:45.340423 | orchestrator | 2026-03-28 00:59:45 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state STARTED
2026-03-28 00:59:45.340486 | orchestrator | 2026-03-28 00:59:45 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:59:48.380662 | orchestrator | 2026-03-28 00:59:48 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state STARTED
2026-03-28 00:59:48.381277 | orchestrator | 2026-03-28 00:59:48 | INFO  | Task 2a909990-c6d3-4ebe-8869-6189aade0c2b is in state SUCCESS
2026-03-28 00:59:48.381314 | orchestrator | 2026-03-28 00:59:48 | INFO  | Wait 1 second(s) until the next check
2026-03-28 00:59:51.449881 | orchestrator | 2026-03-28 00:59:51 | INFO  | Task c29a5fb8-494a-44a8-a278-f079d396a5a6 is in state SUCCESS
2026-03-28 00:59:51.450769 | orchestrator |
2026-03-28 00:59:51.450816 | orchestrator |
2026-03-28 00:59:51.450821 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-28 00:59:51.450840 | orchestrator |
2026-03-28 00:59:51.450845 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-28
00:59:51.450857 | orchestrator | Saturday 28 March 2026 00:59:10 +0000 (0:00:00.249) 0:00:00.249 ********
2026-03-28 00:59:51.450866 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-28 00:59:51.450874 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.450880 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.450887 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 00:59:51.450893 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.450901 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-28 00:59:51.450906 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-28 00:59:51.450909 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-28 00:59:51.450913 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-28 00:59:51.450917 | orchestrator |
2026-03-28 00:59:51.450921 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-28 00:59:51.450925 | orchestrator | Saturday 28 March 2026 00:59:15 +0000 (0:00:05.242) 0:00:05.492 ********
2026-03-28 00:59:51.450929 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-28 00:59:51.450933 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.450957 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.450964 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 00:59:51.450970 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.450976 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-28 00:59:51.450995 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-28 00:59:51.451001 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-28 00:59:51.451007 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-28 00:59:51.451012 | orchestrator |
2026-03-28 00:59:51.451018 | orchestrator | TASK [Create share directory] **************************************************
2026-03-28 00:59:51.451024 | orchestrator | Saturday 28 March 2026 00:59:19 +0000 (0:00:04.514) 0:00:10.007 ********
2026-03-28 00:59:51.451032 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-28 00:59:51.451038 | orchestrator |
2026-03-28 00:59:51.451044 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-28 00:59:51.451050 | orchestrator | Saturday 28 March 2026 00:59:20 +0000 (0:00:01.173) 0:00:11.181 ********
2026-03-28 00:59:51.451056 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-28 00:59:51.451063 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.451069 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.451075 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 00:59:51.451082 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.451088 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-28 00:59:51.451093 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-28 00:59:51.451100 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-28 00:59:51.451105 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-28 00:59:51.451109 | orchestrator |
2026-03-28 00:59:51.451113 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-28 00:59:51.451116 | orchestrator | Saturday 28 March 2026 00:59:36 +0000 (0:00:15.794) 0:00:26.976 ********
2026-03-28 00:59:51.451120 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-28 00:59:51.451127 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-28 00:59:51.451132 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-28 00:59:51.451138 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-28 00:59:51.451157 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-28 00:59:51.451164 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-28 00:59:51.451170 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-28 00:59:51.451176 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-28 00:59:51.451182 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-28 00:59:51.451188 | orchestrator |
2026-03-28 00:59:51.451227 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-28 00:59:51.451232 | orchestrator | Saturday 28 March 2026 00:59:40 +0000 (0:00:03.483) 0:00:30.459 ********
2026-03-28 00:59:51.451236 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-28 00:59:51.451240 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.451246 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.451252 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-28 00:59:51.451258 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-28 00:59:51.451265 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-28 00:59:51.451271 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-28 00:59:51.451276 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-28 00:59:51.451282 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-28 00:59:51.451288 | orchestrator |
2026-03-28 00:59:51.451294 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 00:59:51.451299 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 00:59:51.451304 | orchestrator |
2026-03-28 00:59:51.451310 | orchestrator |
2026-03-28 00:59:51.451315 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 00:59:51.451322 | orchestrator | Saturday 28 March 2026 00:59:47 +0000 (0:00:07.085) 0:00:37.545 ********
2026-03-28 00:59:51.451328 | orchestrator | ===============================================================================
2026-03-28 00:59:51.451334 | orchestrator | Write ceph keys to the share directory --------------------------------- 15.80s
2026-03-28 00:59:51.451340 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.09s
2026-03-28 00:59:51.451346 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.24s
2026-03-28 00:59:51.451356 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.51s
2026-03-28 00:59:51.451360 | orchestrator | Check if target directories exist --------------------------------------- 3.48s
2026-03-28 00:59:51.451367 | orchestrator | Create share directory -------------------------------------------------- 1.17s
2026-03-28 00:59:51.451374 | orchestrator |
2026-03-28 00:59:51.451381 | orchestrator |
2026-03-28 00:59:51.451388 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-03-28 00:59:51.451395 | orchestrator |
2026-03-28 00:59:51.451401 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-28 00:59:51.451407 | orchestrator | Saturday 28 March 2026 00:56:15 +0000 (0:00:00.108) 0:00:00.108 ********
2026-03-28 00:59:51.451413 | orchestrator | ok: [localhost] => {
2026-03-28 00:59:51.451419 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-28 00:59:51.451424 | orchestrator | }
2026-03-28 00:59:51.451428 | orchestrator |
2026-03-28 00:59:51.451433 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-03-28 00:59:51.451437 | orchestrator | Saturday 28 March 2026 00:56:15 +0000 (0:00:00.053) 0:00:00.162 ********
2026-03-28 00:59:51.451441 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-03-28 00:59:51.451446 | orchestrator | ...ignoring
2026-03-28 00:59:51.451450 | orchestrator |
2026-03-28 00:59:51.451454 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-03-28 00:59:51.451459 | orchestrator | Saturday 28 March 2026 00:56:18 +0000 (0:00:03.109) 0:00:03.271 ********
2026-03-28 00:59:51.451464 | orchestrator | skipping: [localhost]
2026-03-28 00:59:51.451469 | orchestrator |
2026-03-28 00:59:51.451473 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-03-28 00:59:51.451484 | orchestrator | Saturday 28 March 2026 00:56:18 +0000 (0:00:00.051) 0:00:03.323 ********
2026-03-28 00:59:51.451491 | orchestrator | ok: [localhost]
2026-03-28 00:59:51.451497 | orchestrator |
2026-03-28 00:59:51.451504 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 00:59:51.451511 | orchestrator |
2026-03-28 00:59:51.451517 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 00:59:51.451523 | orchestrator | Saturday 28 March 2026 00:56:19 +0000 (0:00:00.249) 0:00:03.572 ********
2026-03-28 00:59:51.451897 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:51.451909 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:59:51.451913 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:59:51.451917 | orchestrator |
2026-03-28 00:59:51.451921 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 00:59:51.451925 | orchestrator | Saturday 28 March 2026 00:56:19 +0000 (0:00:00.336) 0:00:03.909 ********
2026-03-28 00:59:51.451929 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-28 00:59:51.451933 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-28 00:59:51.451954 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-28 00:59:51.451959 | orchestrator |
2026-03-28 00:59:51.451963 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-28 00:59:51.451967 | orchestrator |
2026-03-28 00:59:51.451971 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-28 00:59:51.451975 | orchestrator | Saturday 28 March 2026 00:56:19 +0000 (0:00:00.403) 0:00:04.312 ********
2026-03-28 00:59:51.451979 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:59:51.451983 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 00:59:51.451987 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 00:59:51.451991 | orchestrator |
2026-03-28 00:59:51.451994 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 00:59:51.451998 | orchestrator | Saturday 28 March 2026 00:56:20 +0000 (0:00:00.391) 0:00:04.703 ********
2026-03-28 00:59:51.452002 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:59:51.452006 | orchestrator |
2026-03-28 00:59:51.452010 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-03-28 00:59:51.452013 | orchestrator | Saturday 28 March 2026 00:56:21 +0000 (0:00:00.805) 0:00:05.509 ********
2026-03-28 00:59:51.452026 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 00:59:51.452052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 00:59:51.452061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 00:59:51.452066 | orchestrator |
2026-03-28 00:59:51.452070 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-03-28 00:59:51.452078 | orchestrator | Saturday 28 March 2026 00:56:25 +0000 (0:00:03.957) 0:00:09.466 ********
2026-03-28 00:59:51.452082 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452086 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452090 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:51.452094 | orchestrator |
2026-03-28 00:59:51.452098 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-28 00:59:51.452101 | orchestrator | Saturday 28 March 2026 00:56:25 +0000 (0:00:00.869) 0:00:10.335 ********
2026-03-28 00:59:51.452105 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452109 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452112 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:51.452117 | orchestrator |
2026-03-28 00:59:51.452120 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-28 00:59:51.452124 | orchestrator | Saturday 28 March 2026 00:56:27 +0000 (0:00:01.680) 0:00:12.016 ********
2026-03-28 00:59:51.452132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 00:59:51.452139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 00:59:51.452152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 00:59:51.452156 | orchestrator |
2026-03-28 00:59:51.452160 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-28 00:59:51.452164 | orchestrator | Saturday 28 March 2026 00:56:31 +0000 (0:00:04.217) 0:00:16.233 ********
2026-03-28 00:59:51.452168 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452171 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452175 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:51.452179 | orchestrator |
2026-03-28 00:59:51.452182 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-28 00:59:51.452186 | orchestrator | Saturday 28 March 2026 00:56:32 +0000 (0:00:01.126) 0:00:17.359 ********
2026-03-28 00:59:51.452190 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:51.452247 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:59:51.452252 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:59:51.452256 | orchestrator |
2026-03-28 00:59:51.452260 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 00:59:51.452264 | orchestrator | Saturday 28 March 2026 00:56:37 +0000 (0:00:04.375) 0:00:21.735 ********
2026-03-28 00:59:51.452268 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 00:59:51.452272 | orchestrator |
2026-03-28 00:59:51.452275 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-28
00:59:51.452279 | orchestrator | Saturday 28 March 2026 00:56:38 +0000 (0:00:00.825) 0:00:22.561 ******** 2026-03-28 00:59:51.452286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.452294 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 00:59:51.452303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.452308 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:59:51.452314 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.452326 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.452329 | orchestrator | 2026-03-28 00:59:51.452333 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2026-03-28 00:59:51.452337 | orchestrator | Saturday 28 March 2026 00:56:41 +0000 (0:00:03.649) 0:00:26.210 ******** 2026-03-28 00:59:51.452344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2026-03-28 00:59:51.452348 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:59:51.452356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.452364 | orchestrator | skipping: 
[testbed-node-1] 2026-03-28 00:59:51.452370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.452374 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.452378 | orchestrator | 2026-03-28 
00:59:51.452382 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-28 00:59:51.452385 | orchestrator | Saturday 28 March 2026 00:56:44 +0000 (0:00:02.177) 0:00:28.388 ******** 2026-03-28 00:59:51.452392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.452400 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.452404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-03-28 00:59:51.452408 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.452415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.452423 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 00:59:51.452427 | orchestrator | 2026-03-28 00:59:51.452430 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-03-28 00:59:51.452434 | orchestrator | Saturday 28 March 2026 00:56:46 +0000 (0:00:02.601) 0:00:30.990 ******** 2026-03-28 00:59:51.452441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 00:59:51.452449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-03-28 00:59:51.452459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-28 00:59:51.452464 | orchestrator | 2026-03-28 00:59:51.452468 | orchestrator | TASK [service-check-containers : mariadb | Notify 
handlers to restart containers] *** 2026-03-28 00:59:51.452471 | orchestrator | Saturday 28 March 2026 00:56:49 +0000 (0:00:02.889) 0:00:33.879 ******** 2026-03-28 00:59:51.452475 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 00:59:51.452479 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:59:51.452483 | orchestrator | } 2026-03-28 00:59:51.452487 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 00:59:51.452490 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:59:51.452494 | orchestrator | } 2026-03-28 00:59:51.452498 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 00:59:51.452502 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 00:59:51.452505 | orchestrator | } 2026-03-28 00:59:51.452510 | orchestrator | 2026-03-28 00:59:51.452513 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 00:59:51.452517 | orchestrator | Saturday 28 March 2026 00:56:49 +0000 (0:00:00.342) 0:00:34.222 ******** 2026-03-28 00:59:51.452524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.452532 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:59:51.452539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.452543 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.452550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 00:59:51.452557 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452561 | orchestrator |
2026-03-28 00:59:51.452568 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-03-28 00:59:51.452574 | orchestrator | Saturday 28 March 2026 00:56:52 +0000 (0:00:03.025) 0:00:37.247 ********
2026-03-28 00:59:51.452580 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452586 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452592 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452599 | orchestrator |
2026-03-28 00:59:51.452605 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-03-28 00:59:51.452611 | orchestrator | Saturday 28 March 2026 00:56:53 +0000 (0:00:00.411) 0:00:37.659 ********
2026-03-28 00:59:51.452617 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452623 | orchestrator |
2026-03-28 00:59:51.452629 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-03-28 00:59:51.452636 | orchestrator | Saturday 28 March 2026 00:56:53 +0000 (0:00:00.331) 0:00:37.779 ********
2026-03-28 00:59:51.452643 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452653 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452660 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452666 | orchestrator |
2026-03-28 00:59:51.452673 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-03-28 00:59:51.452680 | orchestrator | Saturday 28 March 2026 00:56:53 +0000 (0:00:00.390) 0:00:38.111 ********
2026-03-28 00:59:51.452686 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452694 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452698 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452702 | orchestrator |
2026-03-28 00:59:51.452706 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-03-28 00:59:51.452709 | orchestrator | Saturday 28 March 2026 00:56:54 +0000 (0:00:00.380) 0:00:38.502 ********
2026-03-28 00:59:51.452713 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452717 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452720 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452724 | orchestrator |
2026-03-28 00:59:51.452728 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-03-28 00:59:51.452732 | orchestrator | Saturday 28 March 2026 00:56:54 +0000 (0:00:00.380) 0:00:38.882 ********
2026-03-28 00:59:51.452735 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452739 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452747 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452750 | orchestrator |
2026-03-28 00:59:51.452754 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-03-28 00:59:51.452758 | orchestrator | Saturday 28 March 2026 00:56:54 +0000 (0:00:00.472) 0:00:39.355 ********
2026-03-28 00:59:51.452761 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452765 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452769 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452772 | orchestrator |
2026-03-28 00:59:51.452776 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-03-28 00:59:51.452780 | orchestrator | Saturday 28 March 2026 00:56:55 +0000 (0:00:00.281) 0:00:39.637 ********
2026-03-28 00:59:51.452783 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452787 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452791 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452794 | orchestrator |
2026-03-28 00:59:51.452798 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-03-28 00:59:51.452802 | orchestrator | Saturday 28 March 2026 00:56:55 +0000 (0:00:00.291) 0:00:39.929 ********
2026-03-28 00:59:51.452805 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-28 00:59:51.452809 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-28 00:59:51.452813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-28 00:59:51.452816 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452820 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-28 00:59:51.452824 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-28 00:59:51.452827 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-28 00:59:51.452831 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452839 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-28 00:59:51.452842 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-28 00:59:51.452846 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-28 00:59:51.452850 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452854 | orchestrator |
2026-03-28 00:59:51.452857 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-03-28 00:59:51.452861 | orchestrator | Saturday 28 March 2026 00:56:55 +0000 (0:00:00.364) 0:00:40.293 ********
2026-03-28 00:59:51.452865 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452868 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452872 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452876 | orchestrator |
2026-03-28 00:59:51.452880 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-03-28 00:59:51.452883 | orchestrator | Saturday 28 March 2026 00:56:56 +0000 (0:00:00.480) 0:00:40.774 ********
2026-03-28 00:59:51.452887 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452890 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452894 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452898 | orchestrator |
2026-03-28 00:59:51.452902 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-03-28 00:59:51.452905 | orchestrator | Saturday 28 March 2026 00:56:56 +0000 (0:00:00.328) 0:00:41.102 ********
2026-03-28 00:59:51.452909 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452913 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452917 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452920 | orchestrator |
2026-03-28 00:59:51.452924 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-03-28 00:59:51.452928 | orchestrator | Saturday 28 March 2026 00:56:57 +0000 (0:00:00.375) 0:00:41.478 ********
2026-03-28 00:59:51.452932 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452936 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452939 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452947 | orchestrator |
2026-03-28 00:59:51.452951 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-03-28 00:59:51.452954 | orchestrator | Saturday 28 March 2026 00:56:57 +0000 (0:00:00.328) 0:00:41.807 ********
2026-03-28 00:59:51.452958 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452962 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452965 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452969 | orchestrator |
2026-03-28 00:59:51.452973 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-03-28 00:59:51.452976 | orchestrator | Saturday 28 March 2026 00:56:57 +0000 (0:00:00.521) 0:00:42.329 ********
2026-03-28 00:59:51.452980 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.452984 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.452987 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.452991 | orchestrator |
2026-03-28 00:59:51.452995 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-03-28 00:59:51.453003 | orchestrator | Saturday 28 March 2026 00:56:58 +0000 (0:00:00.299) 0:00:42.713 ********
2026-03-28 00:59:51.453007 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453011 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453015 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453018 | orchestrator |
2026-03-28 00:59:51.453022 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-03-28 00:59:51.453026 | orchestrator | Saturday 28 March 2026 00:56:58 +0000 (0:00:00.299) 0:00:43.013 ********
2026-03-28 00:59:51.453030 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453034 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453037 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453041 | orchestrator |
2026-03-28 00:59:51.453045 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] ****************************
2026-03-28 00:59:51.453049 | orchestrator | Saturday 28 March 2026 00:56:58 +0000 (0:00:00.280) 0:00:43.293 ********
2026-03-28 00:59:51.453057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.453062 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.453069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.453076 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.453080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 00:59:51.453085 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453089 | orchestrator |
2026-03-28 00:59:51.453095 | orchestrator | TASK [mariadb : Wait for slave MariaDB] ****************************************
2026-03-28 00:59:51.453098 | orchestrator | Saturday 28 March 2026 00:57:01 +0000 (0:00:02.731) 0:00:46.024 ********
2026-03-28 00:59:51.453102 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453106 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453110 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453236 | orchestrator |
2026-03-28 00:59:51.453245 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] ***************************
2026-03-28 00:59:51.453252 | orchestrator | Saturday 28 March 2026 00:57:01 +0000 (0:00:00.333) 0:00:46.357 ********
2026-03-28 00:59:51.453259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000
rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.453266 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.453273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-28 00:59:51.453283 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:59:51.453375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-28 00:59:51.453391 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453395 | orchestrator |
2026-03-28 00:59:51.453399 | orchestrator | TASK [mariadb : Wait for master mariadb] ***************************************
2026-03-28 00:59:51.453406 | orchestrator | Saturday 28 March 2026 00:57:04 +0000 (0:00:02.943) 0:00:49.301 ********
2026-03-28 00:59:51.453410 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453414 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453417 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453421 | orchestrator |
2026-03-28 00:59:51.453425 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-03-28 00:59:51.453429 | orchestrator | Saturday 28 March 2026 00:57:05 +0000 (0:00:00.569) 0:00:49.871 ********
2026-03-28 00:59:51.453433 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453436 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453440 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453444 | orchestrator |
2026-03-28 00:59:51.453447 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-03-28 00:59:51.453451 | orchestrator | Saturday 28 March 2026 00:57:06 +0000 (0:00:00.791) 0:00:50.663 ********
2026-03-28 00:59:51.453455 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453459 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453462 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453466 | orchestrator |
2026-03-28 00:59:51.453470 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-03-28 00:59:51.453473 | orchestrator | Saturday 28 March 2026 00:57:06 +0000 (0:00:00.516) 0:00:51.180 ********
2026-03-28 00:59:51.453477 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453481 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453484 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453488 | orchestrator |
2026-03-28 00:59:51.453492 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-28 00:59:51.453496 | orchestrator | Saturday 28 March 2026 00:57:07 +0000 (0:00:00.951) 0:00:52.132 ********
2026-03-28 00:59:51.453500 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453503 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453511 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453515 | orchestrator |
2026-03-28 00:59:51.453519 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-28 00:59:51.453523 | orchestrator | Saturday 28 March 2026 00:57:08 +0000 (0:00:00.663) 0:00:52.795 ********
2026-03-28 00:59:51.453526 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:51.453530 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:59:51.453534 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:59:51.453538 | orchestrator |
2026-03-28 00:59:51.453541 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-28 00:59:51.453545 | orchestrator | Saturday 28 March 2026 00:57:09 +0000 (0:00:00.829) 0:00:53.625 ********
2026-03-28 00:59:51.453549 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:51.453553 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:59:51.453557 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:59:51.453560 | orchestrator |
2026-03-28 00:59:51.453564 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-28 00:59:51.453568 | orchestrator | Saturday 28 March 2026 00:57:09 +0000 (0:00:00.326) 0:00:53.951 ********
2026-03-28 00:59:51.453572 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:51.453575 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:59:51.453579 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:59:51.453583 | orchestrator |
2026-03-28 00:59:51.453591 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-28 00:59:51.453595 | orchestrator | Saturday 28 March 2026 00:57:09 +0000 (0:00:00.359) 0:00:54.310 ********
2026-03-28 00:59:51.453599 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-28 00:59:51.453604 | orchestrator | ...ignoring
2026-03-28 00:59:51.453608 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-28 00:59:51.453611 | orchestrator | ...ignoring
2026-03-28 00:59:51.453615 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-28 00:59:51.453619 | orchestrator | ...ignoring
2026-03-28 00:59:51.453623 | orchestrator |
2026-03-28 00:59:51.453627 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-28 00:59:51.453631 | orchestrator | Saturday 28 March 2026 00:57:20 +0000 (0:00:10.855) 0:01:05.166 ********
2026-03-28 00:59:51.453635 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:51.453638 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:59:51.453642 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:59:51.453646 | orchestrator |
2026-03-28 00:59:51.453650 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-28 00:59:51.453653 | orchestrator | Saturday 28 March 2026 00:57:21 +0000 (0:00:00.618) 0:01:05.784 ********
2026-03-28 00:59:51.453657 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453661 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453665 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453668 | orchestrator |
2026-03-28 00:59:51.453673 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-28 00:59:51.453676 | orchestrator | Saturday 28 March 2026 00:57:21 +0000 (0:00:00.321) 0:01:06.106 ********
2026-03-28 00:59:51.453680 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453684 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453688 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453691 | orchestrator |
2026-03-28 00:59:51.453695 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-28 00:59:51.453699 | orchestrator | Saturday 28 March 2026 00:57:22 +0000 (0:00:00.351) 0:01:06.458 ********
2026-03-28 00:59:51.453703 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453706 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453714 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453718 | orchestrator |
2026-03-28 00:59:51.453721 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-28 00:59:51.453725 | orchestrator | Saturday 28 March 2026 00:57:22 +0000 (0:00:00.354) 0:01:06.812 ********
2026-03-28 00:59:51.453729 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:51.453732 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:59:51.453736 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:59:51.453740 | orchestrator |
2026-03-28 00:59:51.453746 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-28 00:59:51.453750 | orchestrator | Saturday 28 March 2026 00:57:23 +0000 (0:00:00.374) 0:01:07.425 ********
2026-03-28 00:59:51.453754 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453758 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453761 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453765 | orchestrator |
2026-03-28 00:59:51.453769 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 00:59:51.453773 | orchestrator | Saturday 28 March 2026 00:57:23 +0000 (0:00:00.452) 0:01:07.800 ********
2026-03-28 00:59:51.453776 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453780 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453784 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-28 00:59:51.453788 | orchestrator |
2026-03-28 00:59:51.453792 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-28 00:59:51.453795 | orchestrator | Saturday 28 March 2026 00:57:23 +0000 (0:00:00.452) 0:01:08.252 ********
2026-03-28 00:59:51.453799 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:51.453803 | orchestrator |
2026-03-28 00:59:51.453806 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-28 00:59:51.453810 | orchestrator | Saturday 28 March 2026 00:57:34 +0000 (0:00:10.951) 0:01:19.204 ********
2026-03-28 00:59:51.453859 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:51.453864 | orchestrator |
2026-03-28 00:59:51.453869 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 00:59:51.453873 | orchestrator | Saturday 28 March 2026 00:57:34 +0000 (0:00:00.140) 0:01:19.345 ********
2026-03-28 00:59:51.453877 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453882 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453886 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453890 | orchestrator |
2026-03-28 00:59:51.453894 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-28 00:59:51.453898 | orchestrator | Saturday 28 March 2026 00:57:36 +0000 (0:00:01.067) 0:01:20.413 ********
2026-03-28 00:59:51.453903 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:51.453907 | orchestrator |
2026-03-28 00:59:51.453911 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-28 00:59:51.453915 | orchestrator | Saturday 28 March 2026 00:57:44 +0000 (0:00:08.311) 0:01:28.724 ********
2026-03-28 00:59:51.453919 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:51.453924 | orchestrator |
2026-03-28 00:59:51.453928 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-28 00:59:51.453932 | orchestrator | Saturday 28 March 2026 00:57:46 +0000 (0:00:01.716) 0:01:30.440 ********
2026-03-28 00:59:51.453936 | orchestrator | ok: [testbed-node-0]
2026-03-28 00:59:51.453940 | orchestrator |
2026-03-28 00:59:51.453945 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-28 00:59:51.453949 | orchestrator | Saturday 28 March 2026 00:57:48 +0000 (0:00:02.136) 0:01:32.577 ********
2026-03-28 00:59:51.453958 | orchestrator | changed: [testbed-node-0]
2026-03-28 00:59:51.453963 | orchestrator |
2026-03-28 00:59:51.453967 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-28 00:59:51.453972 | orchestrator | Saturday 28 March 2026 00:57:48 +0000 (0:00:00.124) 0:01:32.701 ********
2026-03-28 00:59:51.453976 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.453985 | orchestrator | skipping: [testbed-node-1]
2026-03-28 00:59:51.453990 | orchestrator | skipping: [testbed-node-2]
2026-03-28 00:59:51.453994 | orchestrator |
2026-03-28 00:59:51.453998 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-28 00:59:51.454002 | orchestrator | Saturday 28 March 2026 00:57:48 +0000 (0:00:00.573) 0:01:33.275 ********
2026-03-28 00:59:51.454007 | orchestrator | skipping: [testbed-node-0]
2026-03-28 00:59:51.454011 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:59:51.454058 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:59:51.454064 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-28 00:59:51.454068 | orchestrator |
2026-03-28 00:59:51.454072 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-28 00:59:51.454076 | orchestrator | skipping: no hosts matched
2026-03-28 00:59:51.454081 | orchestrator |
2026-03-28 00:59:51.454085 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-28 00:59:51.454090 | orchestrator |
2026-03-28 00:59:51.454094 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-28 00:59:51.454099 | orchestrator | Saturday 28 March 2026 00:57:49 +0000 (0:00:00.348) 0:01:33.623 ********
2026-03-28 00:59:51.454103 | orchestrator | changed: [testbed-node-1]
2026-03-28 00:59:51.454108 | orchestrator |
2026-03-28 00:59:51.454113 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-28 00:59:51.454117 | orchestrator | Saturday 28 March 2026 00:58:10 +0000 (0:00:20.972) 0:01:54.596 ********
2026-03-28 00:59:51.454121 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:59:51.454125 | orchestrator |
2026-03-28 00:59:51.454129 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-28 00:59:51.454133 | orchestrator | Saturday 28 March 2026 00:58:25 +0000 (0:00:15.661) 0:02:10.258 ********
2026-03-28 00:59:51.454137 | orchestrator | ok: [testbed-node-1]
2026-03-28 00:59:51.454140 | orchestrator |
2026-03-28 00:59:51.454144 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-28 00:59:51.454148 | orchestrator |
2026-03-28 00:59:51.454152 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-28 00:59:51.454155 | orchestrator | Saturday 28 March 2026 00:58:28 +0000 (0:00:02.494) 0:02:12.752 ********
2026-03-28 00:59:51.454159 | orchestrator | changed: [testbed-node-2]
2026-03-28 00:59:51.454163 | orchestrator |
2026-03-28 00:59:51.454167 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-28 00:59:51.454171 | orchestrator | Saturday 28 March 2026 00:58:52 +0000 (0:00:24.279) 0:02:37.031 ********
2026-03-28 00:59:51.454174 | orchestrator | ok: [testbed-node-2]
2026-03-28 00:59:51.454178 | orchestrator |
2026-03-28 00:59:51.454182 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-28 00:59:51.454189
| orchestrator | Saturday 28 March 2026 00:59:04 +0000 (0:00:11.566) 0:02:48.598 ******** 2026-03-28 00:59:51.454230 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:59:51.454236 | orchestrator | 2026-03-28 00:59:51.454240 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-28 00:59:51.454246 | orchestrator | 2026-03-28 00:59:51.454253 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-28 00:59:51.454260 | orchestrator | Saturday 28 March 2026 00:59:06 +0000 (0:00:02.599) 0:02:51.198 ******** 2026-03-28 00:59:51.454266 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:59:51.454273 | orchestrator | 2026-03-28 00:59:51.454279 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-28 00:59:51.454286 | orchestrator | Saturday 28 March 2026 00:59:19 +0000 (0:00:12.858) 0:03:04.056 ******** 2026-03-28 00:59:51.454293 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:59:51.454299 | orchestrator | 2026-03-28 00:59:51.454305 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-28 00:59:51.454313 | orchestrator | Saturday 28 March 2026 00:59:24 +0000 (0:00:04.656) 0:03:08.713 ******** 2026-03-28 00:59:51.454325 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:59:51.454332 | orchestrator | 2026-03-28 00:59:51.454338 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-28 00:59:51.454344 | orchestrator | 2026-03-28 00:59:51.454351 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-28 00:59:51.454357 | orchestrator | Saturday 28 March 2026 00:59:26 +0000 (0:00:02.581) 0:03:11.294 ******** 2026-03-28 00:59:51.454364 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 00:59:51.454371 | orchestrator | 
2026-03-28 00:59:51.454378 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-28 00:59:51.454385 | orchestrator | Saturday 28 March 2026 00:59:27 +0000 (0:00:00.571) 0:03:11.866 ******** 2026-03-28 00:59:51.454390 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.454393 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.454397 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:59:51.454401 | orchestrator | 2026-03-28 00:59:51.454405 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-28 00:59:51.454408 | orchestrator | Saturday 28 March 2026 00:59:30 +0000 (0:00:02.665) 0:03:14.531 ******** 2026-03-28 00:59:51.454412 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.454416 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.454419 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:59:51.454423 | orchestrator | 2026-03-28 00:59:51.454427 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-28 00:59:51.454431 | orchestrator | Saturday 28 March 2026 00:59:32 +0000 (0:00:02.528) 0:03:17.059 ******** 2026-03-28 00:59:51.454434 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.454438 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.454442 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:59:51.454445 | orchestrator | 2026-03-28 00:59:51.454449 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-28 00:59:51.454459 | orchestrator | Saturday 28 March 2026 00:59:35 +0000 (0:00:02.516) 0:03:19.576 ******** 2026-03-28 00:59:51.454463 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.454467 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.454471 | orchestrator | changed: [testbed-node-0] 2026-03-28 00:59:51.454475 | orchestrator | 
2026-03-28 00:59:51.454478 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-28 00:59:51.454482 | orchestrator | Saturday 28 March 2026 00:59:37 +0000 (0:00:02.439) 0:03:22.016 ******** 2026-03-28 00:59:51.454486 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:59:51.454490 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:59:51.454494 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:59:51.454497 | orchestrator | 2026-03-28 00:59:51.454501 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-28 00:59:51.454505 | orchestrator | Saturday 28 March 2026 00:59:42 +0000 (0:00:05.172) 0:03:27.188 ******** 2026-03-28 00:59:51.454509 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:59:51.454512 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.454516 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.454520 | orchestrator | 2026-03-28 00:59:51.454523 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-28 00:59:51.454527 | orchestrator | Saturday 28 March 2026 00:59:45 +0000 (0:00:02.254) 0:03:29.442 ******** 2026-03-28 00:59:51.454531 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:59:51.454534 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.454538 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.454542 | orchestrator | 2026-03-28 00:59:51.454546 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-28 00:59:51.454549 | orchestrator | Saturday 28 March 2026 00:59:45 +0000 (0:00:00.555) 0:03:29.998 ******** 2026-03-28 00:59:51.454553 | orchestrator | ok: [testbed-node-1] 2026-03-28 00:59:51.454557 | orchestrator | ok: [testbed-node-2] 2026-03-28 00:59:51.454561 | orchestrator | ok: [testbed-node-0] 2026-03-28 00:59:51.454568 | orchestrator | 2026-03-28 00:59:51.454572 | 
orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-28 00:59:51.454576 | orchestrator | Saturday 28 March 2026 00:59:48 +0000 (0:00:03.178) 0:03:33.177 ******** 2026-03-28 00:59:51.454579 | orchestrator | skipping: [testbed-node-0] 2026-03-28 00:59:51.454583 | orchestrator | skipping: [testbed-node-1] 2026-03-28 00:59:51.454587 | orchestrator | skipping: [testbed-node-2] 2026-03-28 00:59:51.454590 | orchestrator | 2026-03-28 00:59:51.454594 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 00:59:51.454598 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-28 00:59:51.454602 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1  2026-03-28 00:59:51.454611 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-03-28 00:59:51.454615 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-03-28 00:59:51.454619 | orchestrator | 2026-03-28 00:59:51.454622 | orchestrator | 2026-03-28 00:59:51.454626 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 00:59:51.454630 | orchestrator | Saturday 28 March 2026 00:59:49 +0000 (0:00:00.220) 0:03:33.397 ******** 2026-03-28 00:59:51.454634 | orchestrator | =============================================================================== 2026-03-28 00:59:51.454637 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 45.25s 2026-03-28 00:59:51.454641 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 27.23s 2026-03-28 00:59:51.454645 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.86s 2026-03-28 00:59:51.454648 | orchestrator | 
mariadb : Running MariaDB bootstrap container -------------------------- 10.95s 2026-03-28 00:59:51.454652 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.86s 2026-03-28 00:59:51.454656 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.31s 2026-03-28 00:59:51.454659 | orchestrator | service-check : mariadb | Get container facts --------------------------- 5.17s 2026-03-28 00:59:51.454663 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.09s 2026-03-28 00:59:51.454667 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.66s 2026-03-28 00:59:51.454670 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.38s 2026-03-28 00:59:51.454674 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.22s 2026-03-28 00:59:51.454678 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.96s 2026-03-28 00:59:51.454681 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.65s 2026-03-28 00:59:51.454685 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.18s 2026-03-28 00:59:51.454689 | orchestrator | Check MariaDB service --------------------------------------------------- 3.11s 2026-03-28 00:59:51.454693 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.03s 2026-03-28 00:59:51.454696 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.94s 2026-03-28 00:59:51.454700 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.89s 2026-03-28 00:59:51.454704 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.73s 2026-03-28 00:59:51.454707 | orchestrator | mariadb : 
Creating shard root mysql user -------------------------------- 2.67s
2026-03-28 00:59:51.454713 | orchestrator | 2026-03-28 00:59:51 | INFO  | Task 74963c00-14d1-406d-8997-406f44cd92a5 is in state STARTED
2026-03-28 00:59:51.455265 | orchestrator | 2026-03-28 00:59:51 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 00:59:51.457168 | orchestrator | 2026-03-28 00:59:51 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state STARTED
2026-03-28 00:59:51.457248 | orchestrator | 2026-03-28 00:59:51 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:00:46.404715 | orchestrator | 2026-03-28 01:00:46 | INFO  | Task 9a7ff00d-5862-4680-b4ff-b2aa3982eb12 is in state STARTED
2026-03-28 01:00:46.406086 | orchestrator | 2026-03-28 01:00:46 | INFO  | Task 74963c00-14d1-406d-8997-406f44cd92a5 is in state SUCCESS
2026-03-28 01:00:46.406484 | orchestrator | 2026-03-28 01:00:46 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:00:46.407855 | orchestrator | 2026-03-28 01:00:46 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:00:46.409333 | orchestrator | 2026-03-28 01:00:46 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state STARTED
2026-03-28 01:00:46.410391 | orchestrator | 2026-03-28 01:00:46 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED
2026-03-28 01:00:46.410439 | orchestrator | 2026-03-28 01:00:46 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:01:32.222632 | orchestrator | 2026-03-28 01:01:32 | INFO  | Task 9a7ff00d-5862-4680-b4ff-b2aa3982eb12 is in state STARTED
2026-03-28 01:01:32.222700 | orchestrator | 2026-03-28 01:01:32 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:01:32.222706 | orchestrator | 2026-03-28 01:01:32 | INFO  | Task 
4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED 2026-03-28 01:01:32.223626 | orchestrator | 2026-03-28 01:01:32 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state STARTED 2026-03-28 01:01:32.225170 | orchestrator | 2026-03-28 01:01:32 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED 2026-03-28 01:01:32.225201 | orchestrator | 2026-03-28 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:35.270793 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task 9a7ff00d-5862-4680-b4ff-b2aa3982eb12 is in state STARTED 2026-03-28 01:01:35.272276 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED 2026-03-28 01:01:35.274323 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED 2026-03-28 01:01:35.275624 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state STARTED 2026-03-28 01:01:35.277191 | orchestrator | 2026-03-28 01:01:35 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED 2026-03-28 01:01:35.277227 | orchestrator | 2026-03-28 01:01:35 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:38.331138 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task 9a7ff00d-5862-4680-b4ff-b2aa3982eb12 is in state STARTED 2026-03-28 01:01:38.334122 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED 2026-03-28 01:01:38.338798 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED 2026-03-28 01:01:38.342149 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state STARTED 2026-03-28 01:01:38.344008 | orchestrator | 2026-03-28 01:01:38 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED 2026-03-28 01:01:38.344405 | orchestrator | 2026-03-28 01:01:38 | INFO  | Wait 1 
second(s) until the next check 2026-03-28 01:01:41.389518 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task 9a7ff00d-5862-4680-b4ff-b2aa3982eb12 is in state SUCCESS 2026-03-28 01:01:41.390346 | orchestrator | 2026-03-28 01:01:41.390387 | orchestrator | 2026-03-28 01:01:41.390399 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-28 01:01:41.390411 | orchestrator | 2026-03-28 01:01:41.390422 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-28 01:01:41.390434 | orchestrator | Saturday 28 March 2026 00:59:51 +0000 (0:00:00.330) 0:00:00.330 ******** 2026-03-28 01:01:41.390445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-28 01:01:41.390458 | orchestrator | 2026-03-28 01:01:41.390469 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-28 01:01:41.390480 | orchestrator | Saturday 28 March 2026 00:59:51 +0000 (0:00:00.386) 0:00:00.716 ******** 2026-03-28 01:01:41.390491 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-28 01:01:41.390502 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-28 01:01:41.390513 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-28 01:01:41.390524 | orchestrator | 2026-03-28 01:01:41.390535 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-28 01:01:41.390546 | orchestrator | Saturday 28 March 2026 00:59:53 +0000 (0:00:01.674) 0:00:02.391 ******** 2026-03-28 01:01:41.390557 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-28 01:01:41.390567 | orchestrator | 2026-03-28 01:01:41.390578 | orchestrator | TASK [osism.services.cephclient : 
Copy keyring file] *************************** 2026-03-28 01:01:41.390589 | orchestrator | Saturday 28 March 2026 00:59:54 +0000 (0:00:01.363) 0:00:03.754 ******** 2026-03-28 01:01:41.390600 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.390610 | orchestrator | 2026-03-28 01:01:41.390622 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-28 01:01:41.390633 | orchestrator | Saturday 28 March 2026 00:59:55 +0000 (0:00:01.008) 0:00:04.763 ******** 2026-03-28 01:01:41.390644 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.390654 | orchestrator | 2026-03-28 01:01:41.390665 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-28 01:01:41.390695 | orchestrator | Saturday 28 March 2026 00:59:56 +0000 (0:00:00.952) 0:00:05.715 ******** 2026-03-28 01:01:41.390706 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
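The "FAILED - RETRYING: ... (10 retries left)." record above is Ansible's standard retries/until behavior on the "Manage cephclient service" task: the module is re-run on a fixed delay until its condition passes or the retry budget is exhausted. A minimal sketch of that retry pattern (hypothetical helper names and a caller-supplied `check` callable, not the actual module code):

```python
import time


def retry_until(check, retries=10, delay=5.0):
    """Re-run `check` until it returns truthy, in the style of
    Ansible's retries/until. Returns the attempt number that
    succeeded; raises if every attempt fails."""
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            # Mirrors the log's "FAILED - RETRYING (N retries left)." line.
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            time.sleep(delay)
    raise RuntimeError(f"still failing after {retries} attempts")
```

In the run above the first attempt failed while the compose service was still coming up, and a later attempt reported `ok`, which is why the task shows 37.93s in the recap.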
2026-03-28 01:01:41.390789 | orchestrator | ok: [testbed-manager] 2026-03-28 01:01:41.390804 | orchestrator | 2026-03-28 01:01:41.390815 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-28 01:01:41.390826 | orchestrator | Saturday 28 March 2026 01:00:34 +0000 (0:00:37.926) 0:00:43.642 ******** 2026-03-28 01:01:41.390837 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-28 01:01:41.390848 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-28 01:01:41.390859 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-28 01:01:41.390869 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-28 01:01:41.390880 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-28 01:01:41.390891 | orchestrator | 2026-03-28 01:01:41.390901 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-28 01:01:41.390912 | orchestrator | Saturday 28 March 2026 01:00:38 +0000 (0:00:04.317) 0:00:47.959 ******** 2026-03-28 01:01:41.390923 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-28 01:01:41.390934 | orchestrator | 2026-03-28 01:01:41.390945 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-28 01:01:41.390957 | orchestrator | Saturday 28 March 2026 01:00:39 +0000 (0:00:00.645) 0:00:48.605 ******** 2026-03-28 01:01:41.390970 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:01:41.390982 | orchestrator | 2026-03-28 01:01:41.390995 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-28 01:01:41.391007 | orchestrator | Saturday 28 March 2026 01:00:39 +0000 (0:00:00.124) 0:00:48.729 ******** 2026-03-28 01:01:41.391019 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:01:41.391031 | orchestrator | 2026-03-28 01:01:41.391043 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-03-28 01:01:41.391055 | orchestrator | Saturday 28 March 2026 01:00:40 +0000 (0:00:00.347) 0:00:49.077 ******** 2026-03-28 01:01:41.391098 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.391117 | orchestrator | 2026-03-28 01:01:41.391134 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-28 01:01:41.391152 | orchestrator | Saturday 28 March 2026 01:00:41 +0000 (0:00:01.541) 0:00:50.618 ******** 2026-03-28 01:01:41.391171 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.391188 | orchestrator | 2026-03-28 01:01:41.391202 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-28 01:01:41.391215 | orchestrator | Saturday 28 March 2026 01:00:42 +0000 (0:00:00.829) 0:00:51.447 ******** 2026-03-28 01:01:41.391227 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.391240 | orchestrator | 2026-03-28 01:01:41.391252 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-28 01:01:41.391264 | orchestrator | Saturday 28 March 2026 01:00:43 +0000 (0:00:00.677) 0:00:52.125 ******** 2026-03-28 01:01:41.391277 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-28 01:01:41.391289 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-28 01:01:41.391300 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-28 01:01:41.391310 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-28 01:01:41.391321 | orchestrator | 2026-03-28 01:01:41.391332 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:01:41.391343 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:01:41.391356 | orchestrator | 2026-03-28 01:01:41.391366 | orchestrator | 2026-03-28 
01:01:41.391391 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:01:41.391403 | orchestrator | Saturday 28 March 2026 01:00:44 +0000 (0:00:01.647) 0:00:53.773 ******** 2026-03-28 01:01:41.391414 | orchestrator | =============================================================================== 2026-03-28 01:01:41.391425 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.93s 2026-03-28 01:01:41.391448 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.32s 2026-03-28 01:01:41.391459 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.67s 2026-03-28 01:01:41.391469 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.65s 2026-03-28 01:01:41.391480 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.54s 2026-03-28 01:01:41.391490 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.36s 2026-03-28 01:01:41.391501 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.01s 2026-03-28 01:01:41.391512 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s 2026-03-28 01:01:41.391522 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.83s 2026-03-28 01:01:41.391533 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.68s 2026-03-28 01:01:41.391543 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.65s 2026-03-28 01:01:41.391554 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.39s 2026-03-28 01:01:41.391564 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.35s 2026-03-28 01:01:41.391575 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2026-03-28 01:01:41.391585 | orchestrator | 2026-03-28 01:01:41.391596 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-28 01:01:41.391607 | orchestrator | 2.16.14 2026-03-28 01:01:41.391618 | orchestrator | 2026-03-28 01:01:41.391628 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2026-03-28 01:01:41.391639 | orchestrator | 2026-03-28 01:01:41.391649 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-28 01:01:41.391667 | orchestrator | Saturday 28 March 2026 01:00:49 +0000 (0:00:00.278) 0:00:00.278 ******** 2026-03-28 01:01:41.391678 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.391689 | orchestrator | 2026-03-28 01:01:41.391699 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-28 01:01:41.391710 | orchestrator | Saturday 28 March 2026 01:00:51 +0000 (0:00:01.743) 0:00:02.022 ******** 2026-03-28 01:01:41.391721 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.391731 | orchestrator | 2026-03-28 01:01:41.391742 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-28 01:01:41.391753 | orchestrator | Saturday 28 March 2026 01:00:53 +0000 (0:00:01.476) 0:00:03.499 ******** 2026-03-28 01:01:41.391764 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.391774 | orchestrator | 2026-03-28 01:01:41.391785 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-28 01:01:41.391796 | orchestrator | Saturday 28 March 2026 01:00:54 +0000 (0:00:01.296) 0:00:04.795 ******** 2026-03-28 01:01:41.391845 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.391858 | orchestrator | 2026-03-28 01:01:41.391869 | orchestrator | TASK
[Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-28 01:01:41.391880 | orchestrator | Saturday 28 March 2026 01:00:55 +0000 (0:00:01.288) 0:00:06.084 ******** 2026-03-28 01:01:41.391890 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.391901 | orchestrator | 2026-03-28 01:01:41.391912 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-28 01:01:41.391922 | orchestrator | Saturday 28 March 2026 01:00:57 +0000 (0:00:01.593) 0:00:07.678 ******** 2026-03-28 01:01:41.391933 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.391944 | orchestrator | 2026-03-28 01:01:41.391955 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-28 01:01:41.391965 | orchestrator | Saturday 28 March 2026 01:00:58 +0000 (0:00:01.310) 0:00:08.988 ******** 2026-03-28 01:01:41.391976 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.391987 | orchestrator | 2026-03-28 01:01:41.391997 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-28 01:01:41.392016 | orchestrator | Saturday 28 March 2026 01:01:00 +0000 (0:00:02.230) 0:00:11.218 ******** 2026-03-28 01:01:41.392027 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.392038 | orchestrator | 2026-03-28 01:01:41.392048 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-28 01:01:41.392059 | orchestrator | Saturday 28 March 2026 01:01:02 +0000 (0:00:01.782) 0:00:13.001 ******** 2026-03-28 01:01:41.392152 | orchestrator | changed: [testbed-manager] 2026-03-28 01:01:41.392171 | orchestrator | 2026-03-28 01:01:41.392188 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-28 01:01:41.392204 | orchestrator | Saturday 28 March 2026 01:01:12 +0000 (0:00:10.205) 0:00:23.206 ******** 2026-03-28 
01:01:41.392219 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:01:41.392234 | orchestrator | 2026-03-28 01:01:41.392249 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-28 01:01:41.392265 | orchestrator | 2026-03-28 01:01:41.392281 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 01:01:41.392297 | orchestrator | Saturday 28 March 2026 01:01:13 +0000 (0:00:00.220) 0:00:23.427 ******** 2026-03-28 01:01:41.392312 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:41.392327 | orchestrator | 2026-03-28 01:01:41.392342 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-28 01:01:41.392358 | orchestrator | 2026-03-28 01:01:41.392375 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 01:01:41.392390 | orchestrator | Saturday 28 March 2026 01:01:15 +0000 (0:00:01.959) 0:00:25.387 ******** 2026-03-28 01:01:41.392404 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:41.392419 | orchestrator | 2026-03-28 01:01:41.392447 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-28 01:01:41.392459 | orchestrator | 2026-03-28 01:01:41.392470 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-28 01:01:41.392481 | orchestrator | Saturday 28 March 2026 01:01:27 +0000 (0:00:12.524) 0:00:37.911 ******** 2026-03-28 01:01:41.392492 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:01:41.392502 | orchestrator | 2026-03-28 01:01:41.392513 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:01:41.392524 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-28 01:01:41.392535 | orchestrator | testbed-node-0 : 
ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:01:41.392547 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:01:41.392557 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:01:41.392568 | orchestrator | 2026-03-28 01:01:41.392579 | orchestrator | 2026-03-28 01:01:41.392590 | orchestrator | 2026-03-28 01:01:41.392600 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:01:41.392611 | orchestrator | Saturday 28 March 2026 01:01:39 +0000 (0:00:11.628) 0:00:49.540 ******** 2026-03-28 01:01:41.392621 | orchestrator | =============================================================================== 2026-03-28 01:01:41.392632 | orchestrator | Restart ceph manager service ------------------------------------------- 26.11s 2026-03-28 01:01:41.392643 | orchestrator | Create admin user ------------------------------------------------------ 10.21s 2026-03-28 01:01:41.392653 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.23s 2026-03-28 01:01:41.392664 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.78s 2026-03-28 01:01:41.392683 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.74s 2026-03-28 01:01:41.392694 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.59s 2026-03-28 01:01:41.392714 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.48s 2026-03-28 01:01:41.392723 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.31s 2026-03-28 01:01:41.392733 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.30s 2026-03-28 01:01:41.392742 | orchestrator | Set 
mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s 2026-03-28 01:01:41.392752 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.22s 2026-03-28 01:01:41.392761 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED 2026-03-28 01:01:41.392861 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED 2026-03-28 01:01:41.392874 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state STARTED 2026-03-28 01:01:41.393016 | orchestrator | 2026-03-28 01:01:41 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED 2026-03-28 01:01:41.393912 | orchestrator | 2026-03-28 01:01:41 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:44.439399 | orchestrator | 2026-03-28 01:01:44 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED 2026-03-28 01:01:44.439501 | orchestrator | 2026-03-28 01:01:44 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED 2026-03-28 01:01:44.439668 | orchestrator | 2026-03-28 01:01:44 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state STARTED 2026-03-28 01:01:44.442643 | orchestrator | 2026-03-28 01:01:44 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED 2026-03-28 01:01:44.442678 | orchestrator | 2026-03-28 01:01:44 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:47.487403 | orchestrator | 2026-03-28 01:01:47 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED 2026-03-28 01:01:47.490285 | orchestrator | 2026-03-28 01:01:47 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED 2026-03-28 01:01:47.492336 | orchestrator | 2026-03-28 01:01:47 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state STARTED 2026-03-28 01:01:47.493978 | orchestrator | 2026-03-28 01:01:47 | INFO  | Task 
0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED 2026-03-28 01:01:47.494086 | orchestrator | 2026-03-28 01:01:47 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:50.533608 | orchestrator | 2026-03-28 01:01:50 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED 2026-03-28 01:01:50.534631 | orchestrator | 2026-03-28 01:01:50 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED 2026-03-28 01:01:50.536165 | orchestrator | 2026-03-28 01:01:50 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state STARTED 2026-03-28 01:01:50.537148 | orchestrator | 2026-03-28 01:01:50 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED 2026-03-28 01:01:50.537811 | orchestrator | 2026-03-28 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:53.586336 | orchestrator | 2026-03-28 01:01:53 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED 2026-03-28 01:01:53.587132 | orchestrator | 2026-03-28 01:01:53 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED 2026-03-28 01:01:53.588289 | orchestrator | 2026-03-28 01:01:53 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state STARTED 2026-03-28 01:01:53.589263 | orchestrator | 2026-03-28 01:01:53 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED 2026-03-28 01:01:53.589302 | orchestrator | 2026-03-28 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:56.634595 | orchestrator | 2026-03-28 01:01:56 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED 2026-03-28 01:01:56.637262 | orchestrator | 2026-03-28 01:01:56 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED 2026-03-28 01:01:56.640263 | orchestrator | 2026-03-28 01:01:56 | INFO  | Task 4d3b3ce2-a3d6-4a93-b81e-a1cb32536b3e is in state SUCCESS 2026-03-28 01:01:56.641697 | orchestrator | 2026-03-28 01:01:56.641745 | orchestrator | 2026-03-28 
01:01:56.641757 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:01:56.641768 | orchestrator | 2026-03-28 01:01:56.641778 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:01:56.641789 | orchestrator | Saturday 28 March 2026 00:59:53 +0000 (0:00:00.394) 0:00:00.394 ******** 2026-03-28 01:01:56.641799 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.641810 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.641820 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.641829 | orchestrator | 2026-03-28 01:01:56.641855 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:01:56.641866 | orchestrator | Saturday 28 March 2026 00:59:53 +0000 (0:00:00.404) 0:00:00.799 ******** 2026-03-28 01:01:56.641876 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-28 01:01:56.641886 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-28 01:01:56.641896 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-28 01:01:56.641905 | orchestrator | 2026-03-28 01:01:56.641915 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-28 01:01:56.641924 | orchestrator | 2026-03-28 01:01:56.641934 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:01:56.641944 | orchestrator | Saturday 28 March 2026 00:59:53 +0000 (0:00:00.330) 0:00:01.129 ******** 2026-03-28 01:01:56.641953 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:01:56.641963 | orchestrator | 2026-03-28 01:01:56.641973 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-28 01:01:56.641982 | orchestrator | Saturday 28 March 
2026 00:59:54 +0000 (0:00:00.882) 0:00:02.012 ******** 2026-03-28 01:01:56.641997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:01:56.642138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:01:56.642164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:01:56.642194 | orchestrator | 2026-03-28 01:01:56.642315 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-28 01:01:56.642330 | orchestrator | Saturday 28 March 2026 00:59:56 +0000 (0:00:01.781) 0:00:03.794 ******** 2026-03-28 01:01:56.642342 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.642354 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.642365 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.642376 | orchestrator | 2026-03-28 01:01:56.642387 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:01:56.642406 | orchestrator | Saturday 28 March 2026 00:59:56 +0000 (0:00:00.297) 0:00:04.091 ******** 2026-03-28 01:01:56.642418 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-28 01:01:56.642430 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-28 01:01:56.642442 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-28 01:01:56.642459 | orchestrator | skipping: 
[testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-28 01:01:56.642471 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-28 01:01:56.642481 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-28 01:01:56.642493 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-28 01:01:56.642504 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-28 01:01:56.642515 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-28 01:01:56.642526 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-28 01:01:56.642537 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-28 01:01:56.642549 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-28 01:01:56.642560 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-28 01:01:56.642570 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-28 01:01:56.642580 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-28 01:01:56.642589 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-28 01:01:56.642599 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-28 01:01:56.642608 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-28 01:01:56.642618 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-28 01:01:56.642627 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-28 
01:01:56.642637 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-28 01:01:56.642654 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-28 01:01:56.642667 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-28 01:01:56.642682 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-28 01:01:56.642700 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-28 01:01:56.642718 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-28 01:01:56.642734 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-28 01:01:56.642751 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-28 01:01:56.642766 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-28 01:01:56.642782 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-28 01:01:56.642795 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-28 01:01:56.642811 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 
=> (item={'name': 'neutron', 'enabled': True}) 2026-03-28 01:01:56.642828 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-28 01:01:56.642845 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-28 01:01:56.642862 | orchestrator | 2026-03-28 01:01:56.642876 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:01:56.642886 | orchestrator | Saturday 28 March 2026 00:59:57 +0000 (0:00:01.009) 0:00:05.101 ******** 2026-03-28 01:01:56.642895 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.642905 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.642914 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.642924 | orchestrator | 2026-03-28 01:01:56.642942 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:01:56.642952 | orchestrator | Saturday 28 March 2026 00:59:58 +0000 (0:00:00.320) 0:00:05.422 ******** 2026-03-28 01:01:56.642961 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.642972 | orchestrator | 2026-03-28 01:01:56.642981 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:01:56.642991 | orchestrator | Saturday 28 March 2026 00:59:58 +0000 (0:00:00.142) 0:00:05.564 ******** 2026-03-28 01:01:56.643007 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.643018 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.643027 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.643037 | orchestrator | 2026-03-28 01:01:56.643122 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:01:56.643137 | orchestrator | Saturday 28 
March 2026 00:59:58 +0000 (0:00:00.304) 0:00:05.869 ******** 2026-03-28 01:01:56.643146 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.643156 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.643165 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.643175 | orchestrator | 2026-03-28 01:01:56.643184 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:01:56.643194 | orchestrator | Saturday 28 March 2026 00:59:59 +0000 (0:00:00.298) 0:00:06.167 ******** 2026-03-28 01:01:56.643213 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.643222 | orchestrator | 2026-03-28 01:01:56.643266 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:01:56.643278 | orchestrator | Saturday 28 March 2026 00:59:59 +0000 (0:00:00.152) 0:00:06.319 ******** 2026-03-28 01:01:56.643288 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.643298 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.643308 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.643318 | orchestrator | 2026-03-28 01:01:56.643327 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:01:56.643337 | orchestrator | Saturday 28 March 2026 00:59:59 +0000 (0:00:00.622) 0:00:06.942 ******** 2026-03-28 01:01:56.643346 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.643356 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.643366 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.643375 | orchestrator | 2026-03-28 01:01:56.643385 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:01:56.643395 | orchestrator | Saturday 28 March 2026 01:00:00 +0000 (0:00:00.830) 0:00:07.772 ******** 2026-03-28 01:01:56.643405 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.643415 | orchestrator | 
2026-03-28 01:01:56.643424 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:01:56.643434 | orchestrator | Saturday 28 March 2026 01:00:00 +0000 (0:00:00.181) 0:00:07.954 ******** 2026-03-28 01:01:56.643444 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.643453 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.643463 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.643473 | orchestrator | 2026-03-28 01:01:56.643482 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:01:56.643492 | orchestrator | Saturday 28 March 2026 01:00:01 +0000 (0:00:00.308) 0:00:08.262 ******** 2026-03-28 01:01:56.643502 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.643511 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.643521 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.643530 | orchestrator | 2026-03-28 01:01:56.643540 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:01:56.643550 | orchestrator | Saturday 28 March 2026 01:00:01 +0000 (0:00:00.402) 0:00:08.665 ******** 2026-03-28 01:01:56.643559 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.643569 | orchestrator | 2026-03-28 01:01:56.643578 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:01:56.643588 | orchestrator | Saturday 28 March 2026 01:00:01 +0000 (0:00:00.141) 0:00:08.807 ******** 2026-03-28 01:01:56.643598 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.643607 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.643617 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.643626 | orchestrator | 2026-03-28 01:01:56.643636 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:01:56.643646 | 
orchestrator | Saturday 28 March 2026 01:00:02 +0000 (0:00:00.542) 0:00:09.350 ******** 2026-03-28 01:01:56.643655 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.643665 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.643674 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.643684 | orchestrator | 2026-03-28 01:01:56.643694 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:01:56.643703 | orchestrator | Saturday 28 March 2026 01:00:02 +0000 (0:00:00.355) 0:00:09.705 ******** 2026-03-28 01:01:56.643713 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.643722 | orchestrator | 2026-03-28 01:01:56.643732 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:01:56.643741 | orchestrator | Saturday 28 March 2026 01:00:02 +0000 (0:00:00.138) 0:00:09.844 ******** 2026-03-28 01:01:56.643751 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.643760 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.643777 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.643786 | orchestrator | 2026-03-28 01:01:56.643796 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:01:56.643805 | orchestrator | Saturday 28 March 2026 01:00:03 +0000 (0:00:00.336) 0:00:10.180 ******** 2026-03-28 01:01:56.643815 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.643825 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.643834 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.643844 | orchestrator | 2026-03-28 01:01:56.643854 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:01:56.643869 | orchestrator | Saturday 28 March 2026 01:00:03 +0000 (0:00:00.554) 0:00:10.735 ******** 2026-03-28 01:01:56.643886 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
01:01:56.643903 | orchestrator | 2026-03-28 01:01:56.643918 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:01:56.643936 | orchestrator | Saturday 28 March 2026 01:00:03 +0000 (0:00:00.160) 0:00:10.895 ******** 2026-03-28 01:01:56.643951 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.643968 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.643994 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.644010 | orchestrator | 2026-03-28 01:01:56.644024 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:01:56.644038 | orchestrator | Saturday 28 March 2026 01:00:04 +0000 (0:00:00.345) 0:00:11.241 ******** 2026-03-28 01:01:56.644078 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.644095 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.644110 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.644125 | orchestrator | 2026-03-28 01:01:56.644147 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:01:56.644162 | orchestrator | Saturday 28 March 2026 01:00:04 +0000 (0:00:00.420) 0:00:11.661 ******** 2026-03-28 01:01:56.644178 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.644199 | orchestrator | 2026-03-28 01:01:56.644214 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:01:56.644228 | orchestrator | Saturday 28 March 2026 01:00:04 +0000 (0:00:00.139) 0:00:11.800 ******** 2026-03-28 01:01:56.644243 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.644257 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.644272 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.644286 | orchestrator | 2026-03-28 01:01:56.644300 | orchestrator | TASK [horizon : Update policy file name] *************************************** 
2026-03-28 01:01:56.644315 | orchestrator | Saturday 28 March 2026 01:00:05 +0000 (0:00:00.373) 0:00:12.174 ******** 2026-03-28 01:01:56.644329 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.644345 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.644359 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.644374 | orchestrator | 2026-03-28 01:01:56.644389 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:01:56.644404 | orchestrator | Saturday 28 March 2026 01:00:05 +0000 (0:00:00.723) 0:00:12.897 ******** 2026-03-28 01:01:56.644418 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.644433 | orchestrator | 2026-03-28 01:01:56.644449 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:01:56.644464 | orchestrator | Saturday 28 March 2026 01:00:05 +0000 (0:00:00.150) 0:00:13.048 ******** 2026-03-28 01:01:56.644479 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.644494 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.644508 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.644523 | orchestrator | 2026-03-28 01:01:56.644538 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:01:56.644553 | orchestrator | Saturday 28 March 2026 01:00:06 +0000 (0:00:00.360) 0:00:13.409 ******** 2026-03-28 01:01:56.644568 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.644584 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.644599 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.644627 | orchestrator | 2026-03-28 01:01:56.644642 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:01:56.644656 | orchestrator | Saturday 28 March 2026 01:00:06 +0000 (0:00:00.605) 0:00:14.014 ******** 2026-03-28 01:01:56.644673 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 01:01:56.644689 | orchestrator | 2026-03-28 01:01:56.644705 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:01:56.644722 | orchestrator | Saturday 28 March 2026 01:00:07 +0000 (0:00:00.181) 0:00:14.196 ******** 2026-03-28 01:01:56.644738 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.644754 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.644771 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.644787 | orchestrator | 2026-03-28 01:01:56.644804 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-28 01:01:56.644820 | orchestrator | Saturday 28 March 2026 01:00:07 +0000 (0:00:00.351) 0:00:14.547 ******** 2026-03-28 01:01:56.644836 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:01:56.644852 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:01:56.644869 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:01:56.644885 | orchestrator | 2026-03-28 01:01:56.644902 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-28 01:01:56.644916 | orchestrator | Saturday 28 March 2026 01:00:07 +0000 (0:00:00.561) 0:00:15.109 ******** 2026-03-28 01:01:56.644932 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.644950 | orchestrator | 2026-03-28 01:01:56.644964 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-28 01:01:56.644979 | orchestrator | Saturday 28 March 2026 01:00:08 +0000 (0:00:00.140) 0:00:15.249 ******** 2026-03-28 01:01:56.644994 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.645010 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.645026 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.645040 | orchestrator | 2026-03-28 01:01:56.645088 | orchestrator | TASK [horizon : Copying over config.json files for 
services] ******************* 2026-03-28 01:01:56.645105 | orchestrator | Saturday 28 March 2026 01:00:08 +0000 (0:00:00.306) 0:00:15.555 ******** 2026-03-28 01:01:56.645121 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:56.645137 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:01:56.645153 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:56.645170 | orchestrator | 2026-03-28 01:01:56.645188 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-28 01:01:56.645205 | orchestrator | Saturday 28 March 2026 01:00:10 +0000 (0:00:02.159) 0:00:17.715 ******** 2026-03-28 01:01:56.645221 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-28 01:01:56.645237 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-28 01:01:56.645247 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-28 01:01:56.645256 | orchestrator | 2026-03-28 01:01:56.645266 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-28 01:01:56.645275 | orchestrator | Saturday 28 March 2026 01:00:13 +0000 (0:00:03.316) 0:00:21.032 ******** 2026-03-28 01:01:56.645285 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-28 01:01:56.645296 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-28 01:01:56.645319 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-28 01:01:56.645329 | orchestrator | 2026-03-28 01:01:56.645339 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-28 01:01:56.645349 | orchestrator | Saturday 28 March 2026 01:00:16 +0000 
(0:00:02.675) 0:00:23.707 ******** 2026-03-28 01:01:56.645366 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-28 01:01:56.645387 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-28 01:01:56.645397 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-28 01:01:56.645407 | orchestrator | 2026-03-28 01:01:56.645417 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-28 01:01:56.645426 | orchestrator | Saturday 28 March 2026 01:00:18 +0000 (0:00:01.684) 0:00:25.392 ******** 2026-03-28 01:01:56.645436 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.645445 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.645455 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.645465 | orchestrator | 2026-03-28 01:01:56.645475 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-28 01:01:56.645484 | orchestrator | Saturday 28 March 2026 01:00:18 +0000 (0:00:00.325) 0:00:25.718 ******** 2026-03-28 01:01:56.645494 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.645504 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.645513 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.645523 | orchestrator | 2026-03-28 01:01:56.645533 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:01:56.645543 | orchestrator | Saturday 28 March 2026 01:00:18 +0000 (0:00:00.278) 0:00:25.997 ******** 2026-03-28 01:01:56.645552 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:01:56.645562 | orchestrator | 2026-03-28 01:01:56.645571 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-28 01:01:56.645581 | orchestrator | Saturday 28 March 2026 01:00:19 +0000 (0:00:00.861) 0:00:26.858 ******** 2026-03-28 01:01:56.645595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:01:56.645630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:01:56.645656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option 
httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-28 01:01:56.645674 | orchestrator | 2026-03-28 01:01:56.645684 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-28 01:01:56.645694 | orchestrator | Saturday 28 March 2026 01:00:21 +0000 (0:00:01.516) 0:00:28.374 ******** 2026-03-28 01:01:56.645709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 01:01:56.645720 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:01:56.646003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 01:01:56.646107 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:01:56.646125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 01:01:56.646136 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:01:56.646146 | orchestrator |
2026-03-28 01:01:56.646155 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-03-28 01:01:56.646165 | orchestrator | Saturday 28 March 2026 01:00:22 +0000 (0:00:00.947) 0:00:29.322 ********
2026-03-28 01:01:56.646191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 01:01:56.646209 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:01:56.646221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 01:01:56.646238 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:01:56.646262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 01:01:56.646273 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:01:56.646283 | orchestrator |
2026-03-28 01:01:56.646293 | orchestrator | TASK [service-check-containers : horizon | Check containers] *******************
2026-03-28 01:01:56.646302 | orchestrator | Saturday 28 March 2026 01:00:23 +0000 (0:00:01.155) 0:00:30.477 ********
2026-03-28 01:01:56.646313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 01:01:56.646343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 01:01:56.646361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 01:01:56.646378 | orchestrator |
2026-03-28 01:01:56.646389 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] ***
2026-03-28 01:01:56.646403 | orchestrator | Saturday 28 March 2026 01:00:24 +0000 (0:00:01.453) 0:00:31.930 ********
2026-03-28 01:01:56.646413 | orchestrator | changed: [testbed-node-0] => {
2026-03-28 01:01:56.646422 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:01:56.646432 | orchestrator | }
2026-03-28 01:01:56.646442 | orchestrator | changed: [testbed-node-1] => {
2026-03-28 01:01:56.646451 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:01:56.646461 | orchestrator | }
2026-03-28 01:01:56.646471 | orchestrator | changed: [testbed-node-2] => {
2026-03-28 01:01:56.646480 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:01:56.646489 | orchestrator | }
2026-03-28 01:01:56.646499 | orchestrator |
2026-03-28 01:01:56.646508 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-28 01:01:56.646518 | orchestrator | Saturday 28 March 2026 01:00:25 +0000 (0:00:00.347) 0:00:32.277 ********
2026-03-28 01:01:56.646529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-28 01:01:56.646545 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:01:56.646572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:01:56.646583 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.646594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-28 01:01:56.646612 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.646625 | orchestrator | 2026-03-28 01:01:56.646636 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:01:56.646647 | orchestrator | Saturday 28 March 2026 01:00:26 +0000 (0:00:01.542) 0:00:33.820 ******** 2026-03-28 01:01:56.646659 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:01:56.646670 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:01:56.646683 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:01:56.646694 | orchestrator | 2026-03-28 01:01:56.646706 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-28 01:01:56.646717 | orchestrator | Saturday 28 March 2026 01:00:26 +0000 (0:00:00.276) 0:00:34.097 ******** 2026-03-28 01:01:56.646728 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:01:56.646739 | orchestrator | 2026-03-28 01:01:56.646755 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-28 01:01:56.646767 | orchestrator | Saturday 28 March 2026 01:00:27 +0000 (0:00:00.767) 0:00:34.864 ******** 2026-03-28 01:01:56.646778 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:56.646789 | orchestrator | 2026-03-28 
01:01:56.646801 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-28 01:01:56.646812 | orchestrator | Saturday 28 March 2026 01:00:30 +0000 (0:00:02.489) 0:00:37.354 ******** 2026-03-28 01:01:56.646823 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:56.646834 | orchestrator | 2026-03-28 01:01:56.646850 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-28 01:01:56.646861 | orchestrator | Saturday 28 March 2026 01:00:32 +0000 (0:00:02.582) 0:00:39.936 ******** 2026-03-28 01:01:56.646873 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:56.646884 | orchestrator | 2026-03-28 01:01:56.646896 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-28 01:01:56.646907 | orchestrator | Saturday 28 March 2026 01:00:50 +0000 (0:00:18.120) 0:00:58.056 ******** 2026-03-28 01:01:56.646916 | orchestrator | 2026-03-28 01:01:56.646926 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-28 01:01:56.646935 | orchestrator | Saturday 28 March 2026 01:00:51 +0000 (0:00:00.111) 0:00:58.168 ******** 2026-03-28 01:01:56.646945 | orchestrator | 2026-03-28 01:01:56.646954 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-28 01:01:56.646964 | orchestrator | Saturday 28 March 2026 01:00:51 +0000 (0:00:00.078) 0:00:58.247 ******** 2026-03-28 01:01:56.646973 | orchestrator | 2026-03-28 01:01:56.646983 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-28 01:01:56.646992 | orchestrator | Saturday 28 March 2026 01:00:51 +0000 (0:00:00.068) 0:00:58.315 ******** 2026-03-28 01:01:56.647002 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:01:56.647011 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:01:56.647021 | orchestrator | changed: 
[testbed-node-2] 2026-03-28 01:01:56.647031 | orchestrator | 2026-03-28 01:01:56.647040 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:01:56.647079 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-03-28 01:01:56.647091 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-03-28 01:01:56.647101 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-03-28 01:01:56.647110 | orchestrator | 2026-03-28 01:01:56.647120 | orchestrator | 2026-03-28 01:01:56.647129 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:01:56.647139 | orchestrator | Saturday 28 March 2026 01:01:54 +0000 (0:01:03.670) 0:02:01.986 ******** 2026-03-28 01:01:56.647148 | orchestrator | =============================================================================== 2026-03-28 01:01:56.647158 | orchestrator | horizon : Restart horizon container ------------------------------------ 63.67s 2026-03-28 01:01:56.647168 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 18.12s 2026-03-28 01:01:56.647177 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.32s 2026-03-28 01:01:56.647187 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.68s 2026-03-28 01:01:56.647196 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.58s 2026-03-28 01:01:56.647220 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.49s 2026-03-28 01:01:56.647241 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.16s 2026-03-28 01:01:56.647250 | orchestrator | horizon : Ensuring config directories exist 
----------------------------- 1.78s 2026-03-28 01:01:56.647260 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.68s 2026-03-28 01:01:56.647270 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.54s 2026-03-28 01:01:56.647279 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.52s 2026-03-28 01:01:56.647289 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.45s 2026-03-28 01:01:56.647299 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.16s 2026-03-28 01:01:56.647308 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.01s 2026-03-28 01:01:56.647318 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.95s 2026-03-28 01:01:56.647327 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.88s 2026-03-28 01:01:56.647337 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.86s 2026-03-28 01:01:56.647347 | orchestrator | horizon : Update policy file name --------------------------------------- 0.83s 2026-03-28 01:01:56.647357 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2026-03-28 01:01:56.647366 | orchestrator | horizon : Update policy file name --------------------------------------- 0.72s 2026-03-28 01:01:56.647376 | orchestrator | 2026-03-28 01:01:56 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED 2026-03-28 01:01:56.647386 | orchestrator | 2026-03-28 01:01:56 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:01:59.680700 | orchestrator | 2026-03-28 01:01:59 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED 2026-03-28 01:01:59.683230 | orchestrator | 2026-03-28 01:01:59 | INFO  | Task 
4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:01:59.684748 | orchestrator | 2026-03-28 01:01:59 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED
2026-03-28 01:01:59.685817 | orchestrator | 2026-03-28 01:01:59 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:02.730789 | orchestrator | 2026-03-28 01:02:02 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:02.732116 | orchestrator | 2026-03-28 01:02:02 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:02.734924 | orchestrator | 2026-03-28 01:02:02 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED
2026-03-28 01:02:02.734967 | orchestrator | 2026-03-28 01:02:02 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:05.782583 | orchestrator | 2026-03-28 01:02:05 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:05.785183 | orchestrator | 2026-03-28 01:02:05 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:05.788904 | orchestrator | 2026-03-28 01:02:05 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED
2026-03-28 01:02:05.788986 | orchestrator | 2026-03-28 01:02:05 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:08.825617 | orchestrator | 2026-03-28 01:02:08 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:08.825842 | orchestrator | 2026-03-28 01:02:08 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:08.827250 | orchestrator | 2026-03-28 01:02:08 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED
2026-03-28 01:02:08.827298 | orchestrator | 2026-03-28 01:02:08 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:11.881130 | orchestrator | 2026-03-28 01:02:11 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:11.884289 | orchestrator | 2026-03-28 01:02:11 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:11.885526 | orchestrator | 2026-03-28 01:02:11 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED
2026-03-28 01:02:11.885565 | orchestrator | 2026-03-28 01:02:11 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:14.929706 | orchestrator | 2026-03-28 01:02:14 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:14.934121 | orchestrator | 2026-03-28 01:02:14 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:14.936416 | orchestrator | 2026-03-28 01:02:14 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state STARTED
2026-03-28 01:02:14.936608 | orchestrator | 2026-03-28 01:02:14 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:17.999383 | orchestrator | 2026-03-28 01:02:17 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:18.000661 | orchestrator | 2026-03-28 01:02:18 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED
2026-03-28 01:02:18.003178 | orchestrator | 2026-03-28 01:02:18 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:18.005420 | orchestrator | 2026-03-28 01:02:18 | INFO  | Task 1ee051b0-f3a6-4a7f-975f-c3bde3b846d6 is in state STARTED
2026-03-28 01:02:18.007322 | orchestrator | 2026-03-28 01:02:18 | INFO  | Task 0e983d62-7caa-40e4-b401-090a83c48638 is in state SUCCESS
2026-03-28 01:02:18.008159 | orchestrator | 2026-03-28 01:02:18 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:21.051298 | orchestrator | 2026-03-28 01:02:21 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:21.052782 | orchestrator | 2026-03-28 01:02:21 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED
2026-03-28
01:02:21.054179 | orchestrator | 2026-03-28 01:02:21 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:21.054854 | orchestrator | 2026-03-28 01:02:21 | INFO  | Task 1ee051b0-f3a6-4a7f-975f-c3bde3b846d6 is in state STARTED
2026-03-28 01:02:21.054890 | orchestrator | 2026-03-28 01:02:21 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:24.103387 | orchestrator | 2026-03-28 01:02:24 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:24.104084 | orchestrator | 2026-03-28 01:02:24 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED
2026-03-28 01:02:24.105928 | orchestrator | 2026-03-28 01:02:24 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:24.110183 | orchestrator | 2026-03-28 01:02:24 | INFO  | Task 1ee051b0-f3a6-4a7f-975f-c3bde3b846d6 is in state STARTED
2026-03-28 01:02:24.110410 | orchestrator | 2026-03-28 01:02:24 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:27.175413 | orchestrator | 2026-03-28 01:02:27 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:27.179589 | orchestrator | 2026-03-28 01:02:27 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED
2026-03-28 01:02:27.182099 | orchestrator | 2026-03-28 01:02:27 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:27.185811 | orchestrator | 2026-03-28 01:02:27 | INFO  | Task 1ee051b0-f3a6-4a7f-975f-c3bde3b846d6 is in state STARTED
2026-03-28 01:02:27.185852 | orchestrator | 2026-03-28 01:02:27 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:30.285101 | orchestrator | 2026-03-28 01:02:30 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:30.289968 | orchestrator | 2026-03-28 01:02:30 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED
2026-03-28 01:02:30.301444 | orchestrator | 2026-03-28 01:02:30 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:30.305185 | orchestrator | 2026-03-28 01:02:30 | INFO  | Task 1ee051b0-f3a6-4a7f-975f-c3bde3b846d6 is in state STARTED
2026-03-28 01:02:30.305271 | orchestrator | 2026-03-28 01:02:30 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:33.484464 | orchestrator | 2026-03-28 01:02:33 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:33.484563 | orchestrator | 2026-03-28 01:02:33 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED
2026-03-28 01:02:33.484582 | orchestrator | 2026-03-28 01:02:33 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:33.484596 | orchestrator | 2026-03-28 01:02:33 | INFO  | Task 1ee051b0-f3a6-4a7f-975f-c3bde3b846d6 is in state STARTED
2026-03-28 01:02:33.484608 | orchestrator | 2026-03-28 01:02:33 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:36.490228 | orchestrator | 2026-03-28 01:02:36 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:36.494354 | orchestrator | 2026-03-28 01:02:36 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED
2026-03-28 01:02:36.497827 | orchestrator | 2026-03-28 01:02:36 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:36.501393 | orchestrator | 2026-03-28 01:02:36 | INFO  | Task 1ee051b0-f3a6-4a7f-975f-c3bde3b846d6 is in state STARTED
2026-03-28 01:02:36.501752 | orchestrator | 2026-03-28 01:02:36 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:39.556613 | orchestrator | 2026-03-28 01:02:39 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state STARTED
2026-03-28 01:02:39.557990 | orchestrator | 2026-03-28 01:02:39 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED
2026-03-28 01:02:39.558273 | orchestrator | 2026-03-28 01:02:39 | INFO  |
Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:39.559438 | orchestrator | 2026-03-28 01:02:39 | INFO  | Task 1ee051b0-f3a6-4a7f-975f-c3bde3b846d6 is in state STARTED
2026-03-28 01:02:39.559863 | orchestrator | 2026-03-28 01:02:39 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:42.602332 | orchestrator | 2026-03-28 01:02:42 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED
2026-03-28 01:02:42.603680 | orchestrator | 2026-03-28 01:02:42 | INFO  | Task 6d887c72-5d25-4b72-9537-81625c762a32 is in state SUCCESS
2026-03-28 01:02:42.605735 | orchestrator |
2026-03-28 01:02:42.605815 | orchestrator |
2026-03-28 01:02:42.605830 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:02:42.605843 | orchestrator |
2026-03-28 01:02:42.605854 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:02:42.605865 | orchestrator | Saturday 28 March 2026 01:00:48 +0000 (0:00:00.230) 0:00:00.230 ********
2026-03-28 01:02:42.605876 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:02:42.605888 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:02:42.605898 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:02:42.605909 | orchestrator |
2026-03-28 01:02:42.605920 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:02:42.605931 | orchestrator | Saturday 28 March 2026 01:00:49 +0000 (0:00:00.416) 0:00:00.646 ********
2026-03-28 01:02:42.605941 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-28 01:02:42.605953 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-28 01:02:42.605963 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-28 01:02:42.605974 | orchestrator |
2026-03-28 01:02:42.605984 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-28 01:02:42.606669 | orchestrator |
2026-03-28 01:02:42.606687 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-28 01:02:42.606698 | orchestrator | Saturday 28 March 2026 01:00:50 +0000 (0:00:00.721) 0:00:01.367 ********
2026-03-28 01:02:42.606709 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:02:42.606720 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:02:42.606747 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:02:42.606758 | orchestrator |
2026-03-28 01:02:42.606769 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:02:42.606781 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:02:42.606794 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:02:42.606805 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:02:42.606816 | orchestrator |
2026-03-28 01:02:42.606827 | orchestrator |
2026-03-28 01:02:42.606838 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:02:42.606849 | orchestrator | Saturday 28 March 2026 01:02:14 +0000 (0:01:24.496) 0:01:25.864 ********
2026-03-28 01:02:42.606860 | orchestrator | ===============================================================================
2026-03-28 01:02:42.606871 | orchestrator | Waiting for Keystone public port to be UP ------------------------------ 84.50s
2026-03-28 01:02:42.606882 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s
2026-03-28 01:02:42.606893 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s
2026-03-28 01:02:42.606903 | orchestrator |
2026-03-28 01:02:42.606914 | orchestrator |
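The PLAY RECAP blocks in this log use a fixed `host : key=value …` layout. When post-processing job logs like this one, the per-host counters can be pulled out with a short parser; this is an illustrative sketch, not part of the job itself:

```python
import re

# One recap row looks like:
# "testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
RECAP_RE = re.compile(r"^\s*(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Parse one Ansible PLAY RECAP row into (host, {stat: count}), or None."""
    m = RECAP_RE.match(line)
    if not m:
        return None
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", m.group("stats"))}
    return m.group("host"), stats

host, stats = parse_recap_line(
    "testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
)
# host == "testbed-node-0"; stats["ok"] == 3; stats["failed"] == 0
```

A post-processing step can then flag any host with `failed` or `unreachable` greater than zero instead of scanning the recap by eye.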
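The interleaved `Task … is in state STARTED` / `Wait 1 second(s) until the next check` messages come from a client polling several task IDs once per interval until each reaches a terminal state. A minimal sketch of such a wait loop, where `get_task_state` is a hypothetical stand-in for the real task-status API call:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    """Poll task states until every task is terminal; return {task_id: state}.

    get_task_state(task_id) -> str, e.g. "STARTED", "SUCCESS" or "FAILURE".
    """
    pending = set(task_ids)
    results = {}
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state  # terminal: stop polling this one
        pending -= results.keys()
        if pending:
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

This mirrors the observable behavior in the log (one status line per still-pending task, then a wait message); the real client may differ in details such as backoff or error handling.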
2026-03-28 01:02:42.606925 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:02:42.606957 | orchestrator |
2026-03-28 01:02:42.606969 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:02:42.606980 | orchestrator | Saturday 28 March 2026 00:59:53 +0000 (0:00:00.345) 0:00:00.345 ********
2026-03-28 01:02:42.607016 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:02:42.607028 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:02:42.607039 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:02:42.607050 | orchestrator |
2026-03-28 01:02:42.607061 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:02:42.607071 | orchestrator | Saturday 28 March 2026 00:59:53 +0000 (0:00:00.359) 0:00:00.705 ********
2026-03-28 01:02:42.607082 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-28 01:02:42.607093 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-28 01:02:42.607103 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-28 01:02:42.607114 | orchestrator |
2026-03-28 01:02:42.607125 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-03-28 01:02:42.607136 | orchestrator |
2026-03-28 01:02:42.607146 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-28 01:02:42.607157 | orchestrator | Saturday 28 March 2026 00:59:53 +0000 (0:00:00.392) 0:00:01.097 ********
2026-03-28 01:02:42.607168 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:02:42.607178 | orchestrator |
2026-03-28 01:02:42.607189 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-03-28 01:02:42.607200 | orchestrator |
Saturday 28 March 2026 00:59:54 +0000 (0:00:00.903) 0:00:02.001 ******** 2026-03-28 01:02:42.607262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.607288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.607305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.607330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607424 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607460 | orchestrator | 2026-03-28 01:02:42.607474 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-28 01:02:42.607487 | orchestrator | Saturday 28 March 2026 00:59:57 +0000 (0:00:02.644) 0:00:04.645 ******** 2026-03-28 01:02:42.607500 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:42.607512 | orchestrator | 2026-03-28 01:02:42.607525 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-28 01:02:42.607537 | orchestrator | Saturday 28 March 2026 00:59:57 +0000 (0:00:00.162) 0:00:04.807 ******** 2026-03-28 01:02:42.607550 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
01:02:42.607563 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:42.607576 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:42.607588 | orchestrator | 2026-03-28 01:02:42.607601 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-28 01:02:42.607613 | orchestrator | Saturday 28 March 2026 00:59:57 +0000 (0:00:00.316) 0:00:05.123 ******** 2026-03-28 01:02:42.607625 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:02:42.607637 | orchestrator | 2026-03-28 01:02:42.607650 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:02:42.607661 | orchestrator | Saturday 28 March 2026 00:59:58 +0000 (0:00:01.041) 0:00:06.165 ******** 2026-03-28 01:02:42.607672 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:02:42.607684 | orchestrator | 2026-03-28 01:02:42.607695 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-28 01:02:42.607706 | orchestrator | Saturday 28 March 2026 00:59:59 +0000 (0:00:00.960) 0:00:07.126 ******** 2026-03-28 01:02:42.607743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.607763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.607784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.607796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.607906 | orchestrator | 2026-03-28 01:02:42.607917 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-28 01:02:42.607928 | orchestrator | Saturday 28 March 2026 01:00:03 +0000 (0:00:04.165) 0:00:11.292 ******** 2026-03-28 01:02:42.607940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.607953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.607971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:02:42.607983 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:42.608020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.608041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.608053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:02:42.608064 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:42.608077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.608101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.608143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-03-28 01:02:42.608163 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:42.608180 | orchestrator | 2026-03-28 01:02:42.608198 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-28 01:02:42.608216 | orchestrator | Saturday 28 March 2026 01:00:04 +0000 (0:00:00.898) 0:00:12.190 ******** 2026-03-28 01:02:42.608243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.608264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.608284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:02:42.608305 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:42.608336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.608495 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.608514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.608526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.608538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:02:42.608550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:02:42.608583 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:42.608595 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:42.608606 | orchestrator | 2026-03-28 01:02:42.608617 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-28 
01:02:42.608629 | orchestrator | Saturday 28 March 2026 01:00:06 +0000 (0:00:01.335) 0:00:13.526 ******** 2026-03-28 01:02:42.608651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.608669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.608683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.608695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.608721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.608734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.608750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 
01:02:42.608762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.608774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.608785 | orchestrator | 2026-03-28 01:02:42.608796 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-28 01:02:42.608807 | orchestrator | Saturday 28 March 2026 01:00:09 +0000 (0:00:03.768) 0:00:17.294 ******** 2026-03-28 01:02:42.608818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.608845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.608862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.608875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.608887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.608906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.608924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.608936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-28 01:02:42.608952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-28 01:02:42.608963 | orchestrator |
2026-03-28 01:02:42.608974 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-03-28 01:02:42.608985 | orchestrator | Saturday 28 March 2026 01:00:17 +0000 (0:00:07.253) 0:00:24.548 ********
2026-03-28 01:02:42.609051 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:02:42.609062 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:02:42.609073 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:02:42.609084 | orchestrator |
2026-03-28 01:02:42.609095 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-03-28 01:02:42.609106 | orchestrator | Saturday 28 March 2026 01:00:18 +0000 (0:00:01.566) 0:00:26.115 ********
2026-03-28 01:02:42.609117 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.609128 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:02:42.609139 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:02:42.609150 | orchestrator |
2026-03-28 01:02:42.609161 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-03-28 01:02:42.609172 | orchestrator | Saturday 28 March 2026 01:00:19 +0000 (0:00:01.001) 0:00:27.117 ********
2026-03-28 01:02:42.609184 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.609195 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:02:42.609207 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:02:42.609218 | orchestrator |
2026-03-28 01:02:42.609229 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-03-28 01:02:42.609240 | orchestrator | Saturday 28 March 2026 01:00:20 +0000 (0:00:00.340) 0:00:27.458 ********
2026-03-28 01:02:42.609258 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.609269 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:02:42.609280 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:02:42.609290 | orchestrator |
2026-03-28 01:02:42.609302 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-03-28 01:02:42.609313 | orchestrator | Saturday 28 March 2026 01:00:20 +0000 (0:00:00.285) 0:00:27.744 ********
2026-03-28 01:02:42.609324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.609346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.609358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:02:42.609370 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:42.609388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.609421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.609454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:02:42.609474 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:42.609504 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.609523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.609556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-28 01:02:42.609575 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:02:42.609592 | orchestrator |
2026-03-28 01:02:42.609609 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-28 01:02:42.609625 | orchestrator | Saturday 28 March 2026 01:00:21 +0000 (0:00:00.629) 0:00:28.373 ********
2026-03-28 01:02:42.609642 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.609670 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:02:42.609688 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:02:42.609704 | orchestrator |
2026-03-28 01:02:42.609721 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-03-28 01:02:42.609738 | orchestrator | Saturday 28 March 2026 01:00:21 +0000 (0:00:00.499) 0:00:28.873 ********
2026-03-28 01:02:42.609756 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-28 01:02:42.609774 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-28 01:02:42.609793 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-03-28 01:02:42.609809 | orchestrator |
2026-03-28 01:02:42.609827 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-03-28 01:02:42.609843 | orchestrator | Saturday 28 March 2026 01:00:23 +0000 (0:00:01.767) 0:00:30.641 ********
2026-03-28 01:02:42.609860 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 01:02:42.609879 | orchestrator |
2026-03-28 01:02:42.609897 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-03-28 01:02:42.609915 | orchestrator | Saturday 28 March 2026 01:00:24 +0000 (0:00:01.117) 0:00:31.758 ********
2026-03-28 01:02:42.609935 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.609952 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:02:42.609970 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:02:42.610145 | orchestrator |
2026-03-28 01:02:42.610182 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-03-28 01:02:42.610200 | orchestrator | Saturday 28 March 2026 01:00:25 +0000 (0:00:00.598) 0:00:32.356 ********
2026-03-28 01:02:42.610219 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 01:02:42.610239 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-28 01:02:42.610257 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-28 01:02:42.610275 | orchestrator |
2026-03-28 01:02:42.610293 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-03-28 01:02:42.610311 | orchestrator | Saturday 28 March 2026 01:00:26 +0000 (0:00:01.506) 0:00:33.863 ********
2026-03-28 01:02:42.610330 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:02:42.610350 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:02:42.610362 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:02:42.610373 | orchestrator |
2026-03-28 01:02:42.610385 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-03-28 01:02:42.610396 | orchestrator | Saturday 28 March 2026 01:00:27 +0000 (0:00:00.524) 0:00:34.387 ********
2026-03-28 01:02:42.610407 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-28 01:02:42.610418 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-28 01:02:42.610429 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-03-28 01:02:42.610439 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-28 01:02:42.610450 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-28 01:02:42.610461 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-03-28 01:02:42.610490 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-28 01:02:42.610502 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-28 01:02:42.610513 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-03-28 01:02:42.610524 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-28 01:02:42.610535 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-28 01:02:42.610563 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-03-28 01:02:42.610574 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-28 01:02:42.610585 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-28 01:02:42.610595 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-03-28 01:02:42.610605 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 01:02:42.610623 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 01:02:42.610633 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-28 01:02:42.610643 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 01:02:42.610653 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 01:02:42.610662 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-28 01:02:42.610672 | orchestrator |
2026-03-28 01:02:42.610682 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-03-28 01:02:42.610691 | orchestrator | Saturday 28 March 2026 01:00:36 +0000 (0:00:09.311) 0:00:43.699 ********
2026-03-28 01:02:42.610701 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 01:02:42.610710 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 01:02:42.610720 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-28 01:02:42.610730 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 01:02:42.610739 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 01:02:42.610749 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-28 01:02:42.610759 | orchestrator |
2026-03-28 01:02:42.610768 | orchestrator | TASK [service-check-containers : keystone | Check containers] ******************
2026-03-28 01:02:42.610777 | orchestrator | Saturday 28 March 2026 01:00:39 +0000 (0:00:02.772) 0:00:46.472 ********
2026-03-28 01:02:42.610790 | orchestrator |
changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.610811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.610834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-28 01:02:42.610845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.610857 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.610868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-28 01:02:42.610878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.610903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.610914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-28 01:02:42.610924 | orchestrator | 2026-03-28 01:02:42.610938 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-03-28 01:02:42.610948 | orchestrator | Saturday 28 March 2026 01:00:41 +0000 (0:00:02.622) 0:00:49.095 ******** 2026-03-28 01:02:42.610958 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 01:02:42.610968 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:02:42.610978 | orchestrator | } 2026-03-28 01:02:42.610987 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 01:02:42.611021 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:02:42.611031 | orchestrator | } 2026-03-28 01:02:42.611042 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 01:02:42.611052 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:02:42.611062 | orchestrator | } 
2026-03-28 01:02:42.611072 | orchestrator | 2026-03-28 01:02:42.611084 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 01:02:42.611100 | orchestrator | Saturday 28 March 2026 01:00:42 +0000 (0:00:00.564) 0:00:49.659 ******** 2026-03-28 01:02:42.611118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.611137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.611165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:02:42.611181 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:02:42.611210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.611438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.611459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:02:42.611470 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:02:42.611480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-28 01:02:42.611501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-28 01:02:42.611521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-28 01:02:42.611532 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:02:42.611542 | orchestrator | 2026-03-28 01:02:42.611551 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-28 01:02:42.611561 | orchestrator | Saturday 28 March 2026 01:00:43 +0000 (0:00:00.768) 0:00:50.427 ******** 2026-03-28 01:02:42.611571 | 
orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.611581 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:02:42.611590 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:02:42.611600 | orchestrator |
2026-03-28 01:02:42.611610 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-03-28 01:02:42.611625 | orchestrator | Saturday 28 March 2026 01:00:43 +0000 (0:00:00.411) 0:00:50.839 ********
2026-03-28 01:02:42.611642 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:02:42.611658 | orchestrator |
2026-03-28 01:02:42.611675 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-03-28 01:02:42.611698 | orchestrator | Saturday 28 March 2026 01:00:45 +0000 (0:00:02.413) 0:00:53.252 ********
2026-03-28 01:02:42.611715 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:02:42.611731 | orchestrator |
2026-03-28 01:02:42.611748 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-03-28 01:02:42.611767 | orchestrator | Saturday 28 March 2026 01:00:48 +0000 (0:00:02.438) 0:00:55.691 ********
2026-03-28 01:02:42.611786 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:02:42.611804 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:02:42.611821 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:02:42.611840 | orchestrator |
2026-03-28 01:02:42.611853 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-03-28 01:02:42.611863 | orchestrator | Saturday 28 March 2026 01:00:49 +0000 (0:00:01.284) 0:00:56.975 ********
2026-03-28 01:02:42.611873 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:02:42.611882 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:02:42.611892 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:02:42.611901 | orchestrator |
2026-03-28 01:02:42.611911 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-03-28 01:02:42.611920 | orchestrator | Saturday 28 March 2026 01:00:50 +0000 (0:00:00.754) 0:00:57.730 ********
2026-03-28 01:02:42.611930 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.611939 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:02:42.611948 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:02:42.611967 | orchestrator |
2026-03-28 01:02:42.611977 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-03-28 01:02:42.611986 | orchestrator | Saturday 28 March 2026 01:00:51 +0000 (0:00:00.624) 0:00:58.355 ********
2026-03-28 01:02:42.612018 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:02:42.612029 | orchestrator |
2026-03-28 01:02:42.612040 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-03-28 01:02:42.612052 | orchestrator | Saturday 28 March 2026 01:01:10 +0000 (0:00:19.260) 0:01:17.615 ********
2026-03-28 01:02:42.612064 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:02:42.612081 | orchestrator |
2026-03-28 01:02:42.612097 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-28 01:02:42.612114 | orchestrator | Saturday 28 March 2026 01:01:23 +0000 (0:00:12.777) 0:01:30.392 ********
2026-03-28 01:02:42.612135 | orchestrator |
2026-03-28 01:02:42.612153 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-28 01:02:42.612172 | orchestrator | Saturday 28 March 2026 01:01:23 +0000 (0:00:00.191) 0:01:30.584 ********
2026-03-28 01:02:42.612189 | orchestrator |
2026-03-28 01:02:42.612207 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-28 01:02:42.612225 | orchestrator | Saturday 28 March 2026 01:01:23 +0000 (0:00:00.090) 0:01:30.674 ********
2026-03-28 01:02:42.612244 | orchestrator |
2026-03-28 01:02:42.612264 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-03-28 01:02:42.612282 | orchestrator | Saturday 28 March 2026 01:01:23 +0000 (0:00:00.316) 0:01:30.991 ********
2026-03-28 01:02:42.612294 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:02:42.612305 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:02:42.612317 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:02:42.612326 | orchestrator |
2026-03-28 01:02:42.612336 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-03-28 01:02:42.612345 | orchestrator | Saturday 28 March 2026 01:01:44 +0000 (0:00:20.353) 0:01:51.344 ********
2026-03-28 01:02:42.612355 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:02:42.612364 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:02:42.612374 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:02:42.612383 | orchestrator |
2026-03-28 01:02:42.612393 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-03-28 01:02:42.612403 | orchestrator | Saturday 28 March 2026 01:01:54 +0000 (0:00:10.412) 0:02:01.756 ********
2026-03-28 01:02:42.612412 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:02:42.612421 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:02:42.612431 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:02:42.612441 | orchestrator |
2026-03-28 01:02:42.612450 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-28 01:02:42.612459 | orchestrator | Saturday 28 March 2026 01:02:05 +0000 (0:00:11.475) 0:02:13.232 ********
2026-03-28 01:02:42.612469 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:02:42.612479 | orchestrator |
2026-03-28 01:02:42.612489 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-28 01:02:42.612498 | orchestrator | Saturday 28 March 2026 01:02:06 +0000 (0:00:00.764) 0:02:13.997 ********
2026-03-28 01:02:42.612508 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:02:42.612528 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:02:42.612538 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:02:42.612548 | orchestrator |
2026-03-28 01:02:42.612558 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-28 01:02:42.612567 | orchestrator | Saturday 28 March 2026 01:02:07 +0000 (0:00:00.761) 0:02:14.758 ********
2026-03-28 01:02:42.612577 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:02:42.612587 | orchestrator |
2026-03-28 01:02:42.612596 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-28 01:02:42.612605 | orchestrator | Saturday 28 March 2026 01:02:09 +0000 (0:00:01.822) 0:02:16.580 ********
2026-03-28 01:02:42.612624 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-28 01:02:42.612634 | orchestrator |
2026-03-28 01:02:42.612644 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] *************
2026-03-28 01:02:42.612653 | orchestrator | Saturday 28 March 2026 01:02:23 +0000 (0:00:14.016) 0:02:30.597 ********
2026-03-28 01:02:42.612663 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-28 01:02:42.612672 | orchestrator |
2026-03-28 01:02:42.612682 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] ************
2026-03-28 01:02:42.612691 | orchestrator | Saturday 28 March 2026 01:02:27 +0000 (0:00:03.838) 0:02:34.436 ********
2026-03-28 01:02:42.612701 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-28 01:02:42.612717 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-28 01:02:42.612727 | orchestrator |
2026-03-28 01:02:42.612737 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-28 01:02:42.612746 | orchestrator | Saturday 28 March 2026 01:02:34 +0000 (0:00:07.451) 0:02:41.887 ********
2026-03-28 01:02:42.612756 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.612766 | orchestrator |
2026-03-28 01:02:42.612775 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-28 01:02:42.612785 | orchestrator | Saturday 28 March 2026 01:02:34 +0000 (0:00:00.260) 0:02:42.148 ********
2026-03-28 01:02:42.612794 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.612803 | orchestrator |
2026-03-28 01:02:42.612813 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-28 01:02:42.612823 | orchestrator | Saturday 28 March 2026 01:02:35 +0000 (0:00:00.323) 0:02:42.471 ********
2026-03-28 01:02:42.612832 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.612842 | orchestrator |
2026-03-28 01:02:42.612851 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] ***********
2026-03-28 01:02:42.612861 | orchestrator | Saturday 28 March 2026 01:02:36 +0000 (0:00:00.914) 0:02:43.385 ********
2026-03-28 01:02:42.612870 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.613070 | orchestrator |
2026-03-28 01:02:42.613097 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-28 01:02:42.613114 | orchestrator | Saturday 28 March 2026 01:02:36 +0000 (0:00:00.566) 0:02:43.952 ********
2026-03-28 01:02:42.613124 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:02:42.613134 | orchestrator |
2026-03-28 01:02:42.613144 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-28 01:02:42.613153 | orchestrator | Saturday 28 March 2026 01:02:40 +0000 (0:00:03.704) 0:02:47.657 ********
2026-03-28 01:02:42.613163 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:02:42.613172 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:02:42.613182 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:02:42.613191 | orchestrator |
2026-03-28 01:02:42.613201 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:02:42.613211 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-03-28 01:02:42.613222 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-28 01:02:42.613232 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-28 01:02:42.613241 | orchestrator |
2026-03-28 01:02:42.613251 | orchestrator |
2026-03-28 01:02:42.613261 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:02:42.613271 | orchestrator | Saturday 28 March 2026 01:02:40 +0000 (0:00:00.467) 0:02:48.125 ********
2026-03-28 01:02:42.613280 | orchestrator | ===============================================================================
2026-03-28 01:02:42.613398 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 20.35s
2026-03-28 01:02:42.613414 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 19.26s
2026-03-28 01:02:42.613424 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 14.02s
2026-03-28 01:02:42.613433 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.77s
2026-03-28 01:02:42.613443 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.48s
2026-03-28
01:02:42.613452 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.41s 2026-03-28 01:02:42.613462 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.31s 2026-03-28 01:02:42.613471 | orchestrator | service-ks-register : keystone | Creating/deleting endpoints ------------ 7.45s 2026-03-28 01:02:42.613481 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.25s 2026-03-28 01:02:42.613490 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 4.17s 2026-03-28 01:02:42.613500 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 3.84s 2026-03-28 01:02:42.613520 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.77s 2026-03-28 01:02:42.613530 | orchestrator | keystone : Creating default user role ----------------------------------- 3.70s 2026-03-28 01:02:42.613540 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.77s 2026-03-28 01:02:42.613550 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.64s 2026-03-28 01:02:42.613559 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.62s 2026-03-28 01:02:42.613569 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.44s 2026-03-28 01:02:42.613579 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.41s 2026-03-28 01:02:42.613588 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.82s 2026-03-28 01:02:42.613598 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.77s 2026-03-28 01:02:42.613608 | orchestrator | 2026-03-28 01:02:42 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 
2026-03-28 01:02:42.613617 | orchestrator | 2026-03-28 01:02:42 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state STARTED
2026-03-28 01:02:42.613631 | orchestrator | 2026-03-28 01:02:42 | INFO  | Task 1ee051b0-f3a6-4a7f-975f-c3bde3b846d6 is in state STARTED
2026-03-28 01:02:42.613640 | orchestrator | 2026-03-28 01:02:42 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:02:45.670277 | orchestrator | 2026-03-28 01:02:45 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED
2026-03-28 01:02:57.874265 | orchestrator | 2026-03-28 01:02:57 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:02:57.882079 | orchestrator | 2026-03-28 01:02:57 | INFO  | Task 1ee051b0-f3a6-4a7f-975f-c3bde3b846d6 is in state SUCCESS
2026-03-28 01:02:57.882138 | orchestrator | 2026-03-28 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:03:58.875597 | orchestrator | 2026-03-28 01:03:58 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:03:58.876394 | orchestrator | 2026-03-28 01:03:58 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED
2026-03-28 01:03:58.879860 | orchestrator | 2026-03-28 01:03:58 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED
2026-03-28 01:03:58.882276 | orchestrator | 2026-03-28 01:03:58 | INFO  | Task 4fe688db-fb3f-439f-a5cd-1cda7ff0064e is in state SUCCESS
2026-03-28 01:03:58.884173 | orchestrator |
2026-03-28 01:03:58.884222 | orchestrator |
2026-03-28 01:03:58.884229 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:03:58.884235 | orchestrator |
2026-03-28 01:03:58.884241 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:03:58.884246 | orchestrator | Saturday 28 March 2026 01:02:18 +0000 (0:00:00.356) 0:00:00.356 ********
2026-03-28 01:03:58.884252 | orchestrator | ok: [testbed-manager]
2026-03-28 01:03:58.884257 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:03:58.884263 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:03:58.884268 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:03:58.884273 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:03:58.884278 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:03:58.884283 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:03:58.884330 | orchestrator |
2026-03-28 01:03:58.884337 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:03:58.884342 | orchestrator | Saturday 28 March 2026 01:02:19 +0000 (0:00:00.803) 0:00:01.160 ********
2026-03-28 01:03:58.884347 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:58.884353 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:58.884358 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:58.884363 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:58.884368 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:58.884372 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:58.884410 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-28 01:03:58.884418 | orchestrator |
2026-03-28 01:03:58.884465 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-28 01:03:58.884517 | orchestrator |
2026-03-28 01:03:58.884523 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-28 01:03:58.884528 | orchestrator | Saturday 28 March 2026 01:02:20 +0000 (0:00:00.856) 0:00:02.017 ********
2026-03-28 01:03:58.884534 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:03:58.884568 | orchestrator |
2026-03-28 01:03:58.884573 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] *************
2026-03-28 01:03:58.884581 | orchestrator | Saturday 28 March 2026 01:02:21 +0000 (0:00:01.422) 0:00:03.440 ********
2026-03-28 01:03:58.884588 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-28 01:03:58.884596 | orchestrator |
2026-03-28 01:03:58.886204 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************
2026-03-28 01:03:58.886285 | orchestrator | Saturday 28 March 2026 01:02:26 +0000 (0:00:04.658) 0:00:08.098 ********
2026-03-28 01:03:58.886303 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-28 01:03:58.886326 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-28 01:03:58.886352 | orchestrator |
2026-03-28 01:03:58.886369 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-28 01:03:58.886385 | orchestrator | Saturday 28 March 2026 01:02:34 +0000 (0:00:07.967) 0:00:16.065 ********
2026-03-28 01:03:58.886403 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-28 01:03:58.886419 | orchestrator |
2026-03-28 01:03:58.886438 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-28 01:03:58.886455 | orchestrator | Saturday 28 March 2026 01:02:38 +0000 (0:00:03.962) 0:00:20.028 ********
2026-03-28 01:03:58.886507 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-28 01:03:58.886525 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 01:03:58.886541 | orchestrator |
2026-03-28 01:03:58.886558 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-28 01:03:58.886574 | orchestrator | Saturday 28 March 2026 01:02:42 +0000 (0:00:04.116) 0:00:24.145 ********
2026-03-28 01:03:58.886590 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-28 01:03:58.886605 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-28 01:03:58.886615 | orchestrator |
2026-03-28 01:03:58.886624 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting/revoking user roles] ***********
2026-03-28 01:03:58.886634 | orchestrator | Saturday 28 March 2026 01:02:49 +0000 (0:00:07.440) 0:00:31.586 ********
2026-03-28 01:03:58.886643 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-28 01:03:58.886653 | orchestrator |
2026-03-28 01:03:58.886662 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:03:58.886672 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:58.886683 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:58.886694 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:58.886711 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:58.886727 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:58.886784 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:58.886819 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:03:58.886836 | orchestrator |
2026-03-28 01:03:58.886853 | orchestrator |
2026-03-28 01:03:58.886871 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:03:58.886887 | orchestrator | Saturday 28 March 2026 01:02:55 +0000 (0:00:05.684) 0:00:37.270 ********
2026-03-28 01:03:58.886903 | orchestrator | ===============================================================================
2026-03-28 01:03:58.886920 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 7.97s
2026-03-28 01:03:58.886963 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.44s
2026-03-28 01:03:58.886980 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 5.68s
2026-03-28 01:03:58.886995 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting services ------------- 4.66s
2026-03-28 01:03:58.887011 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.12s
2026-03-28 01:03:58.887027 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.96s
2026-03-28 01:03:58.887041 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.42s
2026-03-28 01:03:58.887057 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s
2026-03-28 01:03:58.887073 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s
2026-03-28 01:03:58.887089 | orchestrator |
2026-03-28 01:03:58.887105 | orchestrator |
2026-03-28 01:03:58.887174 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:03:58.887194 | orchestrator |
2026-03-28 01:03:58.887210 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:03:58.887227 | orchestrator | Saturday 28 March 2026 01:00:48 +0000 (0:00:00.504) 0:00:00.504 ********
2026-03-28 01:03:58.887259 | orchestrator | ok: [testbed-manager]
2026-03-28 01:03:58.887277 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:03:58.887292 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:03:58.887309 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:03:58.887325 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:03:58.887341 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:03:58.887357 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:03:58.887374 | orchestrator |
2026-03-28 01:03:58.887390 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:03:58.887406 | orchestrator | Saturday 28 March 2026 01:00:49 +0000 (0:00:00.864) 0:00:01.369 ********
2026-03-28 01:03:58.887424 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-28 01:03:58.887448 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-28 01:03:58.887465 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-28 01:03:58.887481 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-28 01:03:58.887498 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-28 01:03:58.887514 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-28 01:03:58.887530 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-28 01:03:58.887546 | orchestrator |
2026-03-28 01:03:58.887563 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-28 01:03:58.887579 | orchestrator |
2026-03-28 01:03:58.887595 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-28 01:03:58.887612 | orchestrator | Saturday 28 March 2026 01:00:50 +0000 (0:00:01.224) 0:00:02.593 ********
2026-03-28 01:03:58.887628 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-28 01:03:58.887646 | orchestrator |
2026-03-28 01:03:58.887662 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-28 01:03:58.887679 | orchestrator | Saturday 28 March 2026 01:00:53 +0000 (0:00:02.402) 0:00:04.995 ********
2026-03-28 01:03:58.887699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:03:58.887719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:03:58.887755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:03:58.887778 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-03-28 01:03:58.887813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-28 01:03:58.887837 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-28 01:03:58.887856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1',
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.887874 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.887891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.887916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.888035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888048 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-28 01:03:58.888075 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888085 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888096 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888168 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888222 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:03:58.888234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888248 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888264 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888281 | orchestrator | 2026-03-28 01:03:58.888297 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-28 01:03:58.888313 | orchestrator | Saturday 28 March 2026 01:00:58 +0000 (0:00:05.256) 0:00:10.252 ******** 2026-03-28 01:03:58.888329 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:03:58.888346 | orchestrator | 2026-03-28 01:03:58.888363 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-28 01:03:58.888380 | orchestrator | Saturday 28 March 2026 01:01:00 +0000 (0:00:01.479) 0:00:11.731 ******** 2026-03-28 01:03:58.888398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.888426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.888457 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-28 01:03:58.888478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.888503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.888521 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.888537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.888551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888600 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.888615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888722 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888737 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888798 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:03:58.888821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.888852 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.888893 | orchestrator | 2026-03-28 01:03:58.888901 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-28 01:03:58.888909 | orchestrator | Saturday 28 March 2026 01:01:06 +0000 (0:00:06.545) 0:00:18.276 ******** 2026-03-28 01:03:58.888918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.888956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.888972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.888980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.888989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889002 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-28 01:03:58.889010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889060 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889087 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.889100 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889114 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.889134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889149 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889198 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:03:58.889207 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889258 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.889266 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889274 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:58.889287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': 
{}}})  2026-03-28 01:03:58.889296 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.889304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889335 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.889347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889364 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.889371 | orchestrator | 2026-03-28 01:03:58.889379 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-28 01:03:58.889387 | orchestrator | Saturday 28 March 2026 01:01:08 +0000 (0:00:02.098) 0:00:20.375 ******** 2026-03-28 01:03:58.889395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889418 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-28 01:03:58.889426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889446 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889501 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.889509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889522 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889542 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.889550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889559 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:03:58.889572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-28 01:03:58.889589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889621 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889629 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:58.889637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889653 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.889787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.889807 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.889820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889859 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.889879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.889894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.889922 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.889999 | orchestrator | 2026-03-28 01:03:58.890013 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-28 01:03:58.890065 | orchestrator | Saturday 28 March 2026 01:01:11 +0000 (0:00:03.260) 0:00:23.635 ******** 2026-03-28 01:03:58.890133 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-28 01:03:58.890167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.890177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.890191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.890199 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.890207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.890215 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.890223 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.890237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.890252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.890261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.890269 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.890281 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.890290 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.890298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.890308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.890330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.890353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.890367 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.890382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.890403 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:03:58.890418 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.890433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.890462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.890476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.890491 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.890515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.890529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-28 01:03:58.890543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.890557 | orchestrator | 2026-03-28 01:03:58.890571 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-28 01:03:58.890584 | orchestrator | Saturday 28 March 2026 01:01:18 +0000 (0:00:06.925) 0:00:30.561 ******** 2026-03-28 01:03:58.890598 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 01:03:58.890611 | orchestrator | 2026-03-28 01:03:58.890625 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-28 01:03:58.890637 | orchestrator | Saturday 28 March 2026 01:01:19 +0000 (0:00:01.006) 0:00:31.567 ******** 2026-03-28 01:03:58.890651 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:58.890673 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.890685 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.890693 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.890701 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.890709 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.890717 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.890724 | orchestrator | 2026-03-28 01:03:58.890732 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-28 01:03:58.890740 | orchestrator | Saturday 28 March 2026 01:01:20 +0000 (0:00:00.930) 0:00:32.498 ******** 
2026-03-28 01:03:58.890748 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 01:03:58.890755 | orchestrator | 2026-03-28 01:03:58.890763 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-28 01:03:58.890771 | orchestrator | Saturday 28 March 2026 01:01:21 +0000 (0:00:00.869) 0:00:33.367 ******** 2026-03-28 01:03:58.890779 | orchestrator | [WARNING]: Skipped 2026-03-28 01:03:58.890793 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.890801 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-28 01:03:58.890809 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.890817 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-28 01:03:58.890825 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 01:03:58.890833 | orchestrator | [WARNING]: Skipped 2026-03-28 01:03:58.890840 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.890848 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-28 01:03:58.890856 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.890864 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-28 01:03:58.890871 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:03:58.890879 | orchestrator | [WARNING]: Skipped 2026-03-28 01:03:58.890886 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.890894 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-28 01:03:58.890902 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.890909 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-28 
01:03:58.890917 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-28 01:03:58.890949 | orchestrator | [WARNING]: Skipped 2026-03-28 01:03:58.890964 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.890977 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-28 01:03:58.890990 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.891004 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-28 01:03:58.891012 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 01:03:58.891020 | orchestrator | [WARNING]: Skipped 2026-03-28 01:03:58.891028 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.891036 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-28 01:03:58.891044 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.891052 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-28 01:03:58.891059 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-28 01:03:58.891067 | orchestrator | [WARNING]: Skipped 2026-03-28 01:03:58.891080 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.891088 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-28 01:03:58.891096 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.891111 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-28 01:03:58.891119 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 01:03:58.891127 | orchestrator | [WARNING]: Skipped 2026-03-28 01:03:58.891135 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.891142 | orchestrator | 
node-3/prometheus.yml.d' path due to this access issue: 2026-03-28 01:03:58.891150 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-28 01:03:58.891158 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-28 01:03:58.891166 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 01:03:58.891173 | orchestrator | 2026-03-28 01:03:58.891205 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-28 01:03:58.891213 | orchestrator | Saturday 28 March 2026 01:01:23 +0000 (0:00:02.144) 0:00:35.511 ******** 2026-03-28 01:03:58.891221 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:03:58.891229 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:03:58.891237 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.891245 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.891253 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:03:58.891260 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.891268 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:03:58.891276 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.891284 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:03:58.891291 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.891299 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-28 01:03:58.891307 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.891315 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-28 01:03:58.891323 | orchestrator | 2026-03-28 01:03:58.891331 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-28 01:03:58.891338 | orchestrator | Saturday 28 March 2026 01:01:41 +0000 (0:00:17.907) 0:00:53.419 ******** 2026-03-28 01:03:58.891346 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:03:58.891354 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:03:58.891362 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.891375 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.891383 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:03:58.891391 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.891399 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:03:58.891406 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.891414 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:03:58.891422 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.891430 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-28 01:03:58.891439 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.891446 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-28 01:03:58.891454 | orchestrator | 2026-03-28 01:03:58.891464 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-28 01:03:58.891477 | orchestrator | Saturday 28 March 2026 01:01:45 
+0000 (0:00:03.303) 0:00:56.722 ******** 2026-03-28 01:03:58.891499 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:03:58.891512 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.891524 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:03:58.891538 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.891551 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:03:58.891566 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.891580 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:03:58.891593 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.891607 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:03:58.891621 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.891641 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-28 01:03:58.891654 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.891667 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-28 01:03:58.891679 | orchestrator | 2026-03-28 01:03:58.891690 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-28 01:03:58.891702 | orchestrator | Saturday 28 March 2026 01:01:47 +0000 (0:00:02.043) 0:00:58.766 ******** 2026-03-28 01:03:58.891715 | 
orchestrator | ok: [testbed-manager -> localhost] 2026-03-28 01:03:58.891727 | orchestrator | 2026-03-28 01:03:58.891739 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-28 01:03:58.891751 | orchestrator | Saturday 28 March 2026 01:01:47 +0000 (0:00:00.842) 0:00:59.609 ******** 2026-03-28 01:03:58.891765 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:58.891777 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.891788 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.891801 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.891812 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.891824 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.891835 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.891846 | orchestrator | 2026-03-28 01:03:58.891860 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-28 01:03:58.891873 | orchestrator | Saturday 28 March 2026 01:01:48 +0000 (0:00:00.896) 0:01:00.506 ******** 2026-03-28 01:03:58.891886 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:58.891899 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.891913 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.891949 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.891963 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:58.891976 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:58.891989 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:58.892002 | orchestrator | 2026-03-28 01:03:58.892014 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-28 01:03:58.892027 | orchestrator | Saturday 28 March 2026 01:01:50 +0000 (0:00:02.107) 0:01:02.613 ******** 2026-03-28 01:03:58.892041 | orchestrator | skipping: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:03:58.892054 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:03:58.892067 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:58.892081 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:03:58.892106 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.892118 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.892128 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:03:58.892141 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.892152 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:03:58.892166 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.892179 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:03:58.892206 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.892222 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-28 01:03:58.892235 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.892249 | orchestrator | 2026-03-28 01:03:58.892262 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-28 01:03:58.892275 | orchestrator | Saturday 28 March 2026 01:01:52 +0000 (0:00:01.524) 0:01:04.137 ******** 2026-03-28 01:03:58.892288 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:03:58.892301 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.892314 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:03:58.892329 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.892343 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:03:58.892357 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.892371 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:03:58.892384 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.892397 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:03:58.892412 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.892426 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-28 01:03:58.892439 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-28 01:03:58.892453 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.892468 | orchestrator | 2026-03-28 01:03:58.892482 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-28 01:03:58.892496 | orchestrator | Saturday 28 March 2026 01:01:54 +0000 (0:00:02.004) 0:01:06.141 ******** 2026-03-28 01:03:58.892510 | orchestrator | [WARNING]: Skipped 2026-03-28 01:03:58.892525 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-28 01:03:58.892540 | orchestrator | due to this access issue: 2026-03-28 01:03:58.892554 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-28 01:03:58.892568 | orchestrator | not a directory 2026-03-28 01:03:58.892591 | orchestrator | ok: [testbed-manager -> 
localhost] 2026-03-28 01:03:58.892608 | orchestrator | 2026-03-28 01:03:58.892622 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-28 01:03:58.892636 | orchestrator | Saturday 28 March 2026 01:01:56 +0000 (0:00:01.815) 0:01:07.957 ******** 2026-03-28 01:03:58.892649 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:58.892664 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.892678 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.892693 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.892706 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.892720 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.892734 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.892759 | orchestrator | 2026-03-28 01:03:58.892773 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-28 01:03:58.892786 | orchestrator | Saturday 28 March 2026 01:01:56 +0000 (0:00:00.689) 0:01:08.647 ******** 2026-03-28 01:03:58.892799 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:58.892814 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.892827 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.892841 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.892855 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.892869 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.892883 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.892897 | orchestrator | 2026-03-28 01:03:58.892911 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-03-28 01:03:58.892947 | orchestrator | Saturday 28 March 2026 01:01:57 +0000 (0:00:01.000) 0:01:09.647 ******** 2026-03-28 01:03:58.892964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.892980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.893010 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', 
"http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-28 01:03:58.893028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.893051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.893078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.893095 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.893110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.893125 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-28 01:03:58.893148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.893165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.893179 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.893202 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.893225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.893241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.893258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.893273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.893297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.893313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.893335 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option 
httpchk']}}}}) 2026-03-28 01:03:58.893360 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.893376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.893390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.893413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.893428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-28 01:03:58.893442 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.893456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.893493 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.893510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-28 01:03:58.893526 | orchestrator | 2026-03-28 01:03:58.893541 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-03-28 01:03:58.893555 | orchestrator | Saturday 28 March 2026 01:02:02 +0000 (0:00:04.483) 0:01:14.131 ******** 2026-03-28 01:03:58.893570 | orchestrator | changed: [testbed-manager] => { 2026-03-28 01:03:58.893583 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:03:58.893599 | orchestrator | } 2026-03-28 01:03:58.893613 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 01:03:58.893627 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:03:58.893641 | orchestrator | } 2026-03-28 01:03:58.893655 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 01:03:58.893669 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:03:58.893683 | orchestrator | } 2026-03-28 01:03:58.893696 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 
01:03:58.893710 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:03:58.893724 | orchestrator | } 2026-03-28 01:03:58.893739 | orchestrator | changed: [testbed-node-3] => { 2026-03-28 01:03:58.893753 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:03:58.893767 | orchestrator | } 2026-03-28 01:03:58.893781 | orchestrator | changed: [testbed-node-4] => { 2026-03-28 01:03:58.893796 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:03:58.893809 | orchestrator | } 2026-03-28 01:03:58.893822 | orchestrator | changed: [testbed-node-5] => { 2026-03-28 01:03:58.893837 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:03:58.893852 | orchestrator | } 2026-03-28 01:03:58.893867 | orchestrator | 2026-03-28 01:03:58.893880 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 01:03:58.893896 | orchestrator | Saturday 28 March 2026 01:02:03 +0000 (0:00:00.940) 0:01:15.072 ******** 2026-03-28 01:03:58.893919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.894007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.894078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.894096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.894116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.894132 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-28 01:03:58.894147 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.894174 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.894197 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:03:58.894211 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.894232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.894247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.894260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.894274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.894296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.894318 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:03:58.894333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.894346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.894360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.894381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.894395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-28 01:03:58.894408 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:58.894424 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:03:58.894439 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:03:58.894454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.894470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.894502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.894518 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:03:58.894531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.894544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.894563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.894575 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:03:58.894587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-28 01:03:58.894600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.894613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-28 01:03:58.894635 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:03:58.894649 | orchestrator | 2026-03-28 01:03:58.894662 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-28 01:03:58.894676 | orchestrator | Saturday 28 March 2026 01:02:05 +0000 (0:00:02.082) 0:01:17.155 ******** 2026-03-28 01:03:58.894689 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-28 01:03:58.894703 | orchestrator | skipping: [testbed-manager] 2026-03-28 01:03:58.894716 | orchestrator | 2026-03-28 01:03:58.894729 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:03:58.894747 | orchestrator | Saturday 28 March 2026 01:02:06 +0000 (0:00:01.378) 0:01:18.533 ******** 2026-03-28 01:03:58.894761 | orchestrator | 2026-03-28 01:03:58.894774 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:03:58.894787 | orchestrator | Saturday 28 March 2026 01:02:07 +0000 (0:00:00.287) 0:01:18.821 ******** 2026-03-28 01:03:58.894799 | orchestrator | 2026-03-28 01:03:58.894810 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:03:58.894822 | orchestrator | Saturday 28 March 2026 01:02:07 +0000 
(0:00:00.067) 0:01:18.889 ******** 2026-03-28 01:03:58.894834 | orchestrator | 2026-03-28 01:03:58.894848 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:03:58.894861 | orchestrator | Saturday 28 March 2026 01:02:07 +0000 (0:00:00.072) 0:01:18.961 ******** 2026-03-28 01:03:58.894874 | orchestrator | 2026-03-28 01:03:58.894887 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:03:58.894899 | orchestrator | Saturday 28 March 2026 01:02:07 +0000 (0:00:00.066) 0:01:19.027 ******** 2026-03-28 01:03:58.894912 | orchestrator | 2026-03-28 01:03:58.894946 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:03:58.894960 | orchestrator | Saturday 28 March 2026 01:02:07 +0000 (0:00:00.065) 0:01:19.093 ******** 2026-03-28 01:03:58.894972 | orchestrator | 2026-03-28 01:03:58.894984 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-28 01:03:58.894995 | orchestrator | Saturday 28 March 2026 01:02:07 +0000 (0:00:00.068) 0:01:19.161 ******** 2026-03-28 01:03:58.895006 | orchestrator | 2026-03-28 01:03:58.895016 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-28 01:03:58.895028 | orchestrator | Saturday 28 March 2026 01:02:07 +0000 (0:00:00.100) 0:01:19.262 ******** 2026-03-28 01:03:58.895038 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:58.895050 | orchestrator | 2026-03-28 01:03:58.895062 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-28 01:03:58.895073 | orchestrator | Saturday 28 March 2026 01:02:29 +0000 (0:00:21.625) 0:01:40.887 ******** 2026-03-28 01:03:58.895085 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:58.895098 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:58.895109 | 
orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:58.895122 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:03:58.895134 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:03:58.895146 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:03:58.895159 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:58.895171 | orchestrator | 2026-03-28 01:03:58.895187 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-28 01:03:58.895200 | orchestrator | Saturday 28 March 2026 01:02:44 +0000 (0:00:15.592) 0:01:56.480 ******** 2026-03-28 01:03:58.895212 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:58.895224 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:58.895243 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:58.895256 | orchestrator | 2026-03-28 01:03:58.895268 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-28 01:03:58.895289 | orchestrator | Saturday 28 March 2026 01:02:55 +0000 (0:00:10.699) 0:02:07.180 ******** 2026-03-28 01:03:58.895302 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:58.895314 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:58.895326 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:58.895337 | orchestrator | 2026-03-28 01:03:58.895349 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-28 01:03:58.895361 | orchestrator | Saturday 28 March 2026 01:03:02 +0000 (0:00:07.291) 0:02:14.472 ******** 2026-03-28 01:03:58.895373 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:03:58.895386 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:03:58.895397 | orchestrator | changed: [testbed-manager] 2026-03-28 01:03:58.895409 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:03:58.895421 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:03:58.895432 | orchestrator | 
changed: [testbed-node-4]
2026-03-28 01:03:58.895444 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:03:58.895456 | orchestrator |
2026-03-28 01:03:58.895467 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-28 01:03:58.895478 | orchestrator | Saturday 28 March 2026 01:03:18 +0000 (0:00:15.276) 0:02:29.749 ********
2026-03-28 01:03:58.895490 | orchestrator | changed: [testbed-manager]
2026-03-28 01:03:58.895502 | orchestrator |
2026-03-28 01:03:58.895514 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-28 01:03:58.895526 | orchestrator | Saturday 28 March 2026 01:03:30 +0000 (0:00:12.897) 0:02:42.646 ********
2026-03-28 01:03:58.895537 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:03:58.895549 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:03:58.895561 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:03:58.895573 | orchestrator |
2026-03-28 01:03:58.895585 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-28 01:03:58.895597 | orchestrator | Saturday 28 March 2026 01:03:37 +0000 (0:00:06.920) 0:02:49.567 ********
2026-03-28 01:03:58.895609 | orchestrator | changed: [testbed-manager]
2026-03-28 01:03:58.895622 | orchestrator |
2026-03-28 01:03:58.895634 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-28 01:03:58.895645 | orchestrator | Saturday 28 March 2026 01:03:49 +0000 (0:00:11.321) 0:03:00.888 ********
2026-03-28 01:03:58.895657 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:03:58.895669 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:03:58.895680 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:03:58.895691 | orchestrator |
2026-03-28 01:03:58.895704 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:03:58.895717 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-03-28 01:03:58.895730 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 01:03:58.895750 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 01:03:58.895761 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-28 01:03:58.895773 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-28 01:03:58.895786 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-28 01:03:58.895798 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-28 01:03:58.895809 | orchestrator |
2026-03-28 01:03:58.895822 | orchestrator |
2026-03-28 01:03:58.895843 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:03:58.895856 | orchestrator | Saturday 28 March 2026 01:03:56 +0000 (0:00:07.163) 0:03:08.051 ********
2026-03-28 01:03:58.895866 | orchestrator | ===============================================================================
2026-03-28 01:03:58.895878 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.63s
2026-03-28 01:03:58.895891 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.91s
2026-03-28 01:03:58.895903 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.59s
2026-03-28 01:03:58.895916 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.28s
2026-03-28 01:03:58.895949 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.90s
2026-03-28 01:03:58.895960 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 11.32s
2026-03-28 01:03:58.895969 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.70s
2026-03-28 01:03:58.895980 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 7.29s
2026-03-28 01:03:58.895992 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 7.16s
2026-03-28 01:03:58.896003 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.93s
2026-03-28 01:03:58.896014 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.92s
2026-03-28 01:03:58.896032 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.55s
2026-03-28 01:03:58.896043 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 5.26s
2026-03-28 01:03:58.896054 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.48s
2026-03-28 01:03:58.896065 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.30s
2026-03-28 01:03:58.896076 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.26s
2026-03-28 01:03:58.896087 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.40s
2026-03-28 01:03:58.896097 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.14s
2026-03-28 01:03:58.896107 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.11s
2026-03-28 01:03:58.896118 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.10s
2026-03-28 01:03:58.896128 | orchestrator | 2026-03-28 01:03:58 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in
state STARTED 2026-03-28 01:03:58.896140 | orchestrator | 2026-03-28 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:04:01.913691 | orchestrator | 2026-03-28 01:04:01 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:04:01.914549 | orchestrator | 2026-03-28 01:04:01 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:04:01.915535 | orchestrator | 2026-03-28 01:04:01 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:04:01.916511 | orchestrator | 2026-03-28 01:04:01 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:04:01.916549 | orchestrator | 2026-03-28 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:04:04.953555 | orchestrator | 2026-03-28 01:04:04 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:04:04.953662 | orchestrator | 2026-03-28 01:04:04 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:04:04.953678 | orchestrator | 2026-03-28 01:04:04 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:04:04.953690 | orchestrator | 2026-03-28 01:04:04 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:04:04.953744 | orchestrator | 2026-03-28 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:04:07.980073 | orchestrator | 2026-03-28 01:04:07 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:04:07.980629 | orchestrator | 2026-03-28 01:04:07 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:04:07.981960 | orchestrator | 2026-03-28 01:04:07 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:04:07.982715 | orchestrator | 2026-03-28 01:04:07 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 
01:04:07.983033 | orchestrator | 2026-03-28 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:15.250337 | orchestrator | 2026-03-28 01:05:15 | INFO  | Task
798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:15.253005 | orchestrator | 2026-03-28 01:05:15 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:15.254499 | orchestrator | 2026-03-28 01:05:15 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:15.256041 | orchestrator | 2026-03-28 01:05:15 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:15.256070 | orchestrator | 2026-03-28 01:05:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:18.297539 | orchestrator | 2026-03-28 01:05:18 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:18.298250 | orchestrator | 2026-03-28 01:05:18 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:18.299542 | orchestrator | 2026-03-28 01:05:18 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:18.300907 | orchestrator | 2026-03-28 01:05:18 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:18.300957 | orchestrator | 2026-03-28 01:05:18 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:21.331671 | orchestrator | 2026-03-28 01:05:21 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:21.332024 | orchestrator | 2026-03-28 01:05:21 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:21.332820 | orchestrator | 2026-03-28 01:05:21 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:21.333994 | orchestrator | 2026-03-28 01:05:21 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:21.334075 | orchestrator | 2026-03-28 01:05:21 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:24.426646 | orchestrator | 2026-03-28 01:05:24 | INFO  | Task 
798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:24.431177 | orchestrator | 2026-03-28 01:05:24 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:24.431913 | orchestrator | 2026-03-28 01:05:24 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:24.433135 | orchestrator | 2026-03-28 01:05:24 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:24.433171 | orchestrator | 2026-03-28 01:05:24 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:27.460628 | orchestrator | 2026-03-28 01:05:27 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:27.461185 | orchestrator | 2026-03-28 01:05:27 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:27.462066 | orchestrator | 2026-03-28 01:05:27 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:27.462747 | orchestrator | 2026-03-28 01:05:27 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:27.462965 | orchestrator | 2026-03-28 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:30.497089 | orchestrator | 2026-03-28 01:05:30 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:30.498765 | orchestrator | 2026-03-28 01:05:30 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:30.501392 | orchestrator | 2026-03-28 01:05:30 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:30.503255 | orchestrator | 2026-03-28 01:05:30 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:30.503498 | orchestrator | 2026-03-28 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:33.553479 | orchestrator | 2026-03-28 01:05:33 | INFO  | Task 
798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:33.555202 | orchestrator | 2026-03-28 01:05:33 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:33.556362 | orchestrator | 2026-03-28 01:05:33 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:33.559921 | orchestrator | 2026-03-28 01:05:33 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:33.559987 | orchestrator | 2026-03-28 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:36.595986 | orchestrator | 2026-03-28 01:05:36 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:36.596703 | orchestrator | 2026-03-28 01:05:36 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:36.597935 | orchestrator | 2026-03-28 01:05:36 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:36.598618 | orchestrator | 2026-03-28 01:05:36 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:36.599066 | orchestrator | 2026-03-28 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:39.642719 | orchestrator | 2026-03-28 01:05:39 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:39.643967 | orchestrator | 2026-03-28 01:05:39 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:39.645209 | orchestrator | 2026-03-28 01:05:39 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:39.647066 | orchestrator | 2026-03-28 01:05:39 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:39.647122 | orchestrator | 2026-03-28 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:42.691740 | orchestrator | 2026-03-28 01:05:42 | INFO  | Task 
798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:42.692539 | orchestrator | 2026-03-28 01:05:42 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:42.695736 | orchestrator | 2026-03-28 01:05:42 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:42.698129 | orchestrator | 2026-03-28 01:05:42 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:42.698162 | orchestrator | 2026-03-28 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:45.733500 | orchestrator | 2026-03-28 01:05:45 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:45.737046 | orchestrator | 2026-03-28 01:05:45 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:45.740811 | orchestrator | 2026-03-28 01:05:45 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:45.745314 | orchestrator | 2026-03-28 01:05:45 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:45.745974 | orchestrator | 2026-03-28 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:48.785982 | orchestrator | 2026-03-28 01:05:48 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:48.786774 | orchestrator | 2026-03-28 01:05:48 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:48.790151 | orchestrator | 2026-03-28 01:05:48 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:48.791500 | orchestrator | 2026-03-28 01:05:48 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:48.791540 | orchestrator | 2026-03-28 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:51.830625 | orchestrator | 2026-03-28 01:05:51 | INFO  | Task 
798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:51.831550 | orchestrator | 2026-03-28 01:05:51 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:51.832769 | orchestrator | 2026-03-28 01:05:51 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state STARTED 2026-03-28 01:05:51.833679 | orchestrator | 2026-03-28 01:05:51 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:51.833721 | orchestrator | 2026-03-28 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:54.903670 | orchestrator | 2026-03-28 01:05:54 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:05:54.903813 | orchestrator | 2026-03-28 01:05:54 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED 2026-03-28 01:05:54.903837 | orchestrator | 2026-03-28 01:05:54 | INFO  | Task 543e18d2-b17d-435f-b1b6-284401c9eb99 is in state SUCCESS 2026-03-28 01:05:54.903842 | orchestrator | 2026-03-28 01:05:54 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:05:54.903847 | orchestrator | 2026-03-28 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:05:54.904581 | orchestrator | 2026-03-28 01:05:54.904611 | orchestrator | 2026-03-28 01:05:54.904616 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:05:54.904621 | orchestrator | 2026-03-28 01:05:54.904625 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:05:54.904629 | orchestrator | Saturday 28 March 2026 01:02:18 +0000 (0:00:00.343) 0:00:00.343 ******** 2026-03-28 01:05:54.904634 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:05:54.904639 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:05:54.904643 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:05:54.904647 | orchestrator | 2026-03-28 01:05:54.904651 | 
orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:05:54.904655 | orchestrator | Saturday 28 March 2026 01:02:18 +0000 (0:00:00.370) 0:00:00.714 ********
2026-03-28 01:05:54.904659 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-28 01:05:54.904663 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-28 01:05:54.904667 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-28 01:05:54.904671 | orchestrator |
2026-03-28 01:05:54.904675 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-28 01:05:54.904678 | orchestrator |
2026-03-28 01:05:54.904682 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-28 01:05:54.904686 | orchestrator | Saturday 28 March 2026 01:02:19 +0000 (0:00:00.332) 0:00:01.046 ********
2026-03-28 01:05:54.904689 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:05:54.904694 | orchestrator |
2026-03-28 01:05:54.904698 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] ***************
2026-03-28 01:05:54.904704 | orchestrator | Saturday 28 March 2026 01:02:19 +0000 (0:00:00.707) 0:00:01.754 ********
2026-03-28 01:05:54.904710 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-28 01:05:54.904716 | orchestrator |
2026-03-28 01:05:54.904722 | orchestrator | TASK [service-ks-register : glance | Creating/deleting endpoints] **************
2026-03-28 01:05:54.904729 | orchestrator | Saturday 28 March 2026 01:02:25 +0000 (0:00:05.327) 0:00:07.082 ********
2026-03-28 01:05:54.904733 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-28 01:05:54.904738 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-28 01:05:54.904741 | orchestrator |
2026-03-28 01:05:54.904745 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-28 01:05:54.904779 | orchestrator | Saturday 28 March 2026 01:02:33 +0000 (0:00:08.676) 0:00:15.758 ********
2026-03-28 01:05:54.904798 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-28 01:05:54.904805 | orchestrator |
2026-03-28 01:05:54.904811 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-28 01:05:54.904855 | orchestrator | Saturday 28 March 2026 01:02:38 +0000 (0:00:04.411) 0:00:20.169 ********
2026-03-28 01:05:54.904863 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-28 01:05:54.904869 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 01:05:54.904914 | orchestrator |
2026-03-28 01:05:54.904919 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-28 01:05:54.904923 | orchestrator | Saturday 28 March 2026 01:02:43 +0000 (0:00:04.836) 0:00:25.006 ********
2026-03-28 01:05:54.904927 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-28 01:05:54.904931 | orchestrator |
2026-03-28 01:05:54.904936 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] *************
2026-03-28 01:05:54.904942 | orchestrator | Saturday 28 March 2026 01:02:46 +0000 (0:00:03.593) 0:00:28.600 ********
2026-03-28 01:05:54.904961 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-03-28 01:05:54.904975 | orchestrator |
2026-03-28 01:05:54.904987 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-03-28 01:05:54.904993 | orchestrator | Saturday 28 March 2026 01:02:51 +0000 (0:00:04.511) 0:00:33.111 ********
2026-03-28 01:05:54.905016 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.905031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.905047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.905053 | orchestrator | 2026-03-28 01:05:54.905059 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-28 01:05:54.905065 | orchestrator | Saturday 28 March 2026 01:02:55 +0000 (0:00:04.037) 0:00:37.148 ******** 2026-03-28 01:05:54.905074 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:05:54.905080 | orchestrator | 2026-03-28 01:05:54.905086 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-28 01:05:54.905092 | orchestrator | Saturday 28 March 2026 01:02:56 +0000 (0:00:00.960) 0:00:38.109 ******** 2026-03-28 
01:05:54.905098 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:05:54.905104 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:05:54.905110 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:05:54.905116 | orchestrator |
2026-03-28 01:05:54.905121 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-03-28 01:05:54.905127 | orchestrator | Saturday 28 March 2026 01:03:01 +0000 (0:00:05.255) 0:00:43.364 ********
2026-03-28 01:05:54.905132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-03-28 01:05:54.905145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-03-28 01:05:54.905150 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-03-28 01:05:54.905155 | orchestrator |
2026-03-28 01:05:54.905161 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-03-28 01:05:54.905166 | orchestrator | Saturday 28 March 2026 01:03:03 +0000 (0:00:02.017) 0:00:45.382 ********
2026-03-28 01:05:54.905171 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-03-28 01:05:54.905177 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-03-28 01:05:54.905182 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-03-28 01:05:54.905188 | orchestrator |
2026-03-28 01:05:54.905194 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-03-28 01:05:54.905199 | orchestrator | Saturday 28 March 2026 01:03:05 +0000 (0:00:01.369) 0:00:47.646 ********
2026-03-28 01:05:54.905205 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:05:54.905211 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:05:54.905216 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:05:54.905222 | orchestrator |
2026-03-28 01:05:54.905233 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-03-28 01:05:54.905239 | orchestrator | Saturday 28 March 2026 01:03:07 +0000 (0:00:00.252) 0:00:49.015 ********
2026-03-28 01:05:54.905245 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:05:54.905250 | orchestrator |
2026-03-28 01:05:54.905255 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-03-28 01:05:54.905262 | orchestrator | Saturday 28 March 2026 01:03:07 +0000 (0:00:00.332) 0:00:49.268 ********
2026-03-28 01:05:54.905268 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:05:54.905273 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:05:54.905279 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:05:54.905285 | orchestrator |
2026-03-28 01:05:54.905292 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-28 01:05:54.905298 | orchestrator | Saturday 28 March 2026 01:03:07 +0000 (0:00:00.332) 0:00:49.600 ********
2026-03-28 01:05:54.905304 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:05:54.905311 | orchestrator |
2026-03-28 01:05:54.905317 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-03-28 01:05:54.905323 | orchestrator | Saturday 28 March 2026 01:03:08 +0000 (0:00:00.947) 0:00:50.548 ********
2026-03-28 01:05:54.905336 | orchestrator | changed: [testbed-node-1]
=> (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.905353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.905361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.905373 | orchestrator | 2026-03-28 01:05:54.905379 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-28 01:05:54.905385 | orchestrator | Saturday 28 March 2026 01:03:13 +0000 (0:00:05.166) 0:00:55.714 ******** 2026-03-28 01:05:54.905396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:05:54.905404 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:05:54.905414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:05:54.905421 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.905432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:05:54.905444 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.905450 | orchestrator | 2026-03-28 01:05:54.905456 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-28 01:05:54.905463 | orchestrator | Saturday 28 March 2026 01:03:16 +0000 (0:00:03.096) 0:00:58.811 ******** 2026-03-28 01:05:54.905481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:05:54.905487 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.905498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:05:54.905509 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:05:54.905520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:05:54.905526 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.905532 | orchestrator | 2026-03-28 01:05:54.905538 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-28 01:05:54.905544 | orchestrator | Saturday 28 March 2026 01:03:20 +0000 (0:00:03.955) 0:01:02.766 ******** 2026-03-28 01:05:54.905550 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:05:54.905556 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.905561 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.905567 | orchestrator | 2026-03-28 01:05:54.905573 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-28 01:05:54.905578 | orchestrator | Saturday 28 March 2026 01:03:25 +0000 (0:00:04.953) 0:01:07.719 ******** 2026-03-28 01:05:54.905590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.905607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.905616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.905627 | orchestrator | 2026-03-28 01:05:54.905756 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-28 01:05:54.905765 | orchestrator | Saturday 28 March 2026 01:03:30 +0000 (0:00:04.465) 0:01:12.185 ******** 2026-03-28 01:05:54.905770 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:05:54.905776 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:05:54.905782 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:05:54.905788 | orchestrator | 2026-03-28 01:05:54.905793 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-28 01:05:54.905799 | orchestrator | Saturday 28 March 2026 01:03:38 +0000 (0:00:08.371) 0:01:20.557 ******** 2026-03-28 01:05:54.905806 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.905812 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.905842 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 01:05:54.905848 | orchestrator | 2026-03-28 01:05:54.905854 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-28 01:05:54.905860 | orchestrator | Saturday 28 March 2026 01:03:42 +0000 (0:00:04.267) 0:01:24.824 ******** 2026-03-28 01:05:54.905865 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:05:54.905872 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.905877 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.905881 | orchestrator | 2026-03-28 01:05:54.905884 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-28 01:05:54.905888 | orchestrator | Saturday 28 March 2026 01:03:47 +0000 (0:00:04.438) 0:01:29.263 ******** 2026-03-28 01:05:54.905892 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:05:54.905895 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.905899 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.905903 | orchestrator | 2026-03-28 01:05:54.905906 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-28 01:05:54.905910 | orchestrator | Saturday 28 March 2026 01:03:52 +0000 (0:00:04.615) 0:01:33.878 ******** 2026-03-28 01:05:54.905914 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:05:54.905918 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.905921 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.905925 | orchestrator | 2026-03-28 01:05:54.905929 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-28 01:05:54.905932 | orchestrator | Saturday 28 March 2026 01:03:52 +0000 (0:00:00.357) 0:01:34.236 ******** 2026-03-28 01:05:54.905936 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 01:05:54.905941 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 01:05:54.905944 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 01:05:54.905948 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.905956 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-28 01:05:54.905966 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.905970 | orchestrator | 2026-03-28 01:05:54.905974 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-28 01:05:54.905978 | orchestrator | Saturday 28 March 2026 01:03:56 +0000 (0:00:04.498) 0:01:38.734 ******** 2026-03-28 01:05:54.905981 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:05:54.905985 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.905989 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.905992 | orchestrator | 2026-03-28 01:05:54.905996 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-03-28 01:05:54.906000 | orchestrator | Saturday 28 March 2026 01:04:01 +0000 (0:00:04.614) 0:01:43.349 ******** 2026-03-28 01:05:54.906003 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:05:54.906007 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.906011 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.906058 | orchestrator | 2026-03-28 01:05:54.906065 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-03-28 01:05:54.906071 | orchestrator | Saturday 28 March 2026 01:04:05 +0000 (0:00:04.279) 0:01:47.629 ******** 2026-03-28 01:05:54.906086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.906098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.906114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-28 01:05:54.906121 | orchestrator | 2026-03-28 01:05:54.906127 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-03-28 01:05:54.906134 | orchestrator | Saturday 28 March 2026 01:04:10 +0000 (0:00:04.961) 0:01:52.591 ******** 2026-03-28 01:05:54.906140 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 01:05:54.906146 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:05:54.906152 | orchestrator | } 2026-03-28 01:05:54.906158 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 01:05:54.906162 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:05:54.906166 | orchestrator | } 2026-03-28 01:05:54.906173 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 01:05:54.906176 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:05:54.906180 | orchestrator | } 2026-03-28 01:05:54.906184 | orchestrator | 2026-03-28 01:05:54.906188 | orchestrator | TASK 
[service-check-containers : Include tasks] ******************************** 2026-03-28 01:05:54.906192 | orchestrator | Saturday 28 March 2026 01:04:11 +0000 (0:00:00.644) 0:01:53.236 ******** 2026-03-28 01:05:54.906199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:05:54.906207 | orchestrator 
| skipping: [testbed-node-0] 2026-03-28 01:05:54.906211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:05:54.906215 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.906223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-28 01:05:54.906230 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.906234 | orchestrator | 2026-03-28 01:05:54.906238 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-28 01:05:54.906242 | orchestrator | Saturday 28 March 2026 01:04:15 +0000 (0:00:04.463) 0:01:57.699 ******** 
2026-03-28 01:05:54.906245 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:05:54.906249 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:05:54.906253 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:05:54.906256 | orchestrator | 2026-03-28 01:05:54.906260 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-28 01:05:54.906266 | orchestrator | Saturday 28 March 2026 01:04:16 +0000 (0:00:00.302) 0:01:58.002 ******** 2026-03-28 01:05:54.906270 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:05:54.906274 | orchestrator | 2026-03-28 01:05:54.906278 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-28 01:05:54.906281 | orchestrator | Saturday 28 March 2026 01:04:18 +0000 (0:00:02.424) 0:02:00.426 ******** 2026-03-28 01:05:54.906285 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:05:54.906289 | orchestrator | 2026-03-28 01:05:54.906293 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-28 01:05:54.906296 | orchestrator | Saturday 28 March 2026 01:04:21 +0000 (0:00:02.622) 0:02:03.049 ******** 2026-03-28 01:05:54.906300 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:05:54.906304 | orchestrator | 2026-03-28 01:05:54.906307 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-28 01:05:54.906311 | orchestrator | Saturday 28 March 2026 01:04:23 +0000 (0:00:02.522) 0:02:05.572 ******** 2026-03-28 01:05:54.906315 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:05:54.906319 | orchestrator | 2026-03-28 01:05:54.906322 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-28 01:05:54.906326 | orchestrator | Saturday 28 March 2026 01:04:53 +0000 (0:00:29.763) 0:02:35.335 ******** 2026-03-28 01:05:54.906330 | orchestrator | changed: [testbed-node-0] 
2026-03-28 01:05:54.906333 | orchestrator |
2026-03-28 01:05:54.906337 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-28 01:05:54.906341 | orchestrator | Saturday 28 March 2026 01:04:56 +0000 (0:00:02.630) 0:02:37.965 ********
2026-03-28 01:05:54.906344 | orchestrator |
2026-03-28 01:05:54.906348 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-28 01:05:54.906352 | orchestrator | Saturday 28 March 2026 01:04:56 +0000 (0:00:00.082) 0:02:38.048 ********
2026-03-28 01:05:54.906355 | orchestrator |
2026-03-28 01:05:54.906359 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-28 01:05:54.906363 | orchestrator | Saturday 28 March 2026 01:04:56 +0000 (0:00:00.072) 0:02:38.120 ********
2026-03-28 01:05:54.906366 | orchestrator |
2026-03-28 01:05:54.906370 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-03-28 01:05:54.906374 | orchestrator | Saturday 28 March 2026 01:04:56 +0000 (0:00:00.074) 0:02:38.195 ********
2026-03-28 01:05:54.906378 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:05:54.906382 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:05:54.906388 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:05:54.906394 | orchestrator |
2026-03-28 01:05:54.906408 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:05:54.906419 | orchestrator | testbed-node-0 : ok=27  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2026-03-28 01:05:54.906427 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-28 01:05:54.906433 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-28 01:05:54.906439 | orchestrator |
2026-03-28 01:05:54.906444 | orchestrator |
2026-03-28 01:05:54.906454 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:05:54.906461 | orchestrator | Saturday 28 March 2026 01:05:53 +0000 (0:00:57.029) 0:03:35.224 ********
2026-03-28 01:05:54.906467 | orchestrator | ===============================================================================
2026-03-28 01:05:54.906473 | orchestrator | glance : Restart glance-api container ---------------------------------- 57.03s
2026-03-28 01:05:54.906480 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.76s
2026-03-28 01:05:54.906484 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 8.68s
2026-03-28 01:05:54.906488 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.37s
2026-03-28 01:05:54.906493 | orchestrator | service-ks-register : glance | Creating/deleting services --------------- 5.33s
2026-03-28 01:05:54.906497 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.25s
2026-03-28 01:05:54.906501 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.17s
2026-03-28 01:05:54.906505 | orchestrator | service-check-containers : glance | Check containers -------------------- 4.96s
2026-03-28 01:05:54.906510 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.95s
2026-03-28 01:05:54.906514 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.84s
2026-03-28 01:05:54.906518 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.62s
2026-03-28 01:05:54.906523 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.61s
2026-03-28 01:05:54.906527 | orchestrator | service-ks-register : glance | Granting/revoking user roles ------------- 4.51s
2026-03-28 01:05:54.906531 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.50s
2026-03-28 01:05:54.906535 | orchestrator | glance : Copying over config.json files for services -------------------- 4.47s
2026-03-28 01:05:54.906540 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.46s
2026-03-28 01:05:54.906544 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.44s
2026-03-28 01:05:54.906548 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.41s
2026-03-28 01:05:54.906553 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 4.28s
2026-03-28 01:05:54.906557 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.27s
2026-03-28 01:05:57.914875 | orchestrator | 2026-03-28 01:05:57 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:05:57.915393 | orchestrator | 2026-03-28 01:05:57 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state STARTED
2026-03-28 01:05:57.916601 | orchestrator | 2026-03-28 01:05:57 | INFO  | Task 42e1ff8c-99ad-496f-92c4-2816b63e9602 is in state STARTED
2026-03-28 01:05:57.918918 | orchestrator | 2026-03-28 01:05:57 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED
2026-03-28 01:05:57.918964 | orchestrator | 2026-03-28 01:05:57 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:06:31.455413 | orchestrator | 2026-03-28 01:06:31 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:06:31.458166 | orchestrator |
2026-03-28 01:06:31.458238 | orchestrator | 2026-03-28 01:06:31 | INFO  | Task 78672679-2677-423b-8a3d-1dc8008e73ca is in state SUCCESS
2026-03-28 01:06:31.459572 | orchestrator |
2026-03-28 01:06:31.459612 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:06:31.459618 | orchestrator |
2026-03-28 01:06:31.459624 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:06:31.459631 | orchestrator | Saturday 28 March 2026 01:02:44 +0000 (0:00:00.351) 0:00:00.351 ********
2026-03-28 01:06:31.459637 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:06:31.459644 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:06:31.459649 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:06:31.459655 | orchestrator |
2026-03-28 01:06:31.459661 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:06:31.459667 | orchestrator | Saturday 28 March 2026 01:02:45 +0000 (0:00:00.320) 0:00:00.672 ********
2026-03-28 01:06:31.459690 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-03-28 01:06:31.459696 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-03-28 01:06:31.459701 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-03-28 01:06:31.459705 | orchestrator |
2026-03-28 01:06:31.459710 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-03-28 01:06:31.459715 | orchestrator |
2026-03-28 01:06:31.459719 | orchestrator | TASK [cinder : include_tasks]
************************************************** 2026-03-28 01:06:31.459724 | orchestrator | Saturday 28 March 2026 01:02:45 +0000 (0:00:00.376) 0:00:01.048 ******** 2026-03-28 01:06:31.459728 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:06:31.459734 | orchestrator | 2026-03-28 01:06:31.459739 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] *************** 2026-03-28 01:06:31.459743 | orchestrator | Saturday 28 March 2026 01:02:46 +0000 (0:00:01.127) 0:00:02.176 ******** 2026-03-28 01:06:31.459749 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage)) 2026-03-28 01:06:31.459754 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-28 01:06:31.459761 | orchestrator | 2026-03-28 01:06:31.459769 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] ************** 2026-03-28 01:06:31.459794 | orchestrator | Saturday 28 March 2026 01:02:54 +0000 (0:00:07.776) 0:00:09.953 ******** 2026-03-28 01:06:31.459802 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal) 2026-03-28 01:06:31.459851 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public) 2026-03-28 01:06:31.459860 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-28 01:06:31.459868 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-28 01:06:31.459876 | orchestrator | 2026-03-28 01:06:31.459951 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-28 01:06:31.459958 | orchestrator | Saturday 28 March 2026 01:03:09 +0000 (0:00:15.016) 0:00:24.969 ******** 2026-03-28 01:06:31.459963 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:06:31.459968 | orchestrator | 2026-03-28 01:06:31.459972 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-28 01:06:31.459978 | orchestrator | Saturday 28 March 2026 01:03:13 +0000 (0:00:03.775) 0:00:28.744 ******** 2026-03-28 01:06:31.459986 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-28 01:06:31.459994 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:06:31.460001 | orchestrator | 2026-03-28 01:06:31.460008 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-28 01:06:31.460016 | orchestrator | Saturday 28 March 2026 01:03:17 +0000 (0:00:04.494) 0:00:33.238 ******** 2026-03-28 01:06:31.460024 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:06:31.460032 | orchestrator | 2026-03-28 01:06:31.460039 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] ************* 2026-03-28 01:06:31.460047 | orchestrator | Saturday 28 March 2026 01:03:21 +0000 (0:00:03.895) 0:00:37.133 ******** 2026-03-28 01:06:31.460053 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-28 01:06:31.460058 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-28 01:06:31.460062 | orchestrator | 2026-03-28 01:06:31.460067 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-28 01:06:31.460071 | orchestrator | Saturday 28 March 2026 01:03:29 +0000 (0:00:08.369) 0:00:45.503 ******** 2026-03-28 01:06:31.460093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.460221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.460237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.460246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.460274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.460292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.460310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.460319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.460332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.460342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.460349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.460362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.460371 | orchestrator | 2026-03-28 01:06:31.460383 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 01:06:31.460393 | orchestrator | Saturday 28 March 2026 01:03:33 +0000 (0:00:03.688) 0:00:49.192 ******** 2026-03-28 01:06:31.460403 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.460411 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:31.460420 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:31.460428 | orchestrator | 2026-03-28 01:06:31.460436 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 
01:06:31.460445 | orchestrator | Saturday 28 March 2026 01:03:34 +0000 (0:00:00.445) 0:00:49.637 ******** 2026-03-28 01:06:31.460454 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:06:31.460462 | orchestrator | 2026-03-28 01:06:31.460492 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-28 01:06:31.460497 | orchestrator | Saturday 28 March 2026 01:03:34 +0000 (0:00:00.886) 0:00:50.524 ******** 2026-03-28 01:06:31.460502 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-28 01:06:31.460507 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-28 01:06:31.460512 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-28 01:06:31.460516 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-28 01:06:31.460521 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-28 01:06:31.460525 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-28 01:06:31.460530 | orchestrator | 2026-03-28 01:06:31.460534 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-28 01:06:31.460539 | orchestrator | Saturday 28 March 2026 01:03:37 +0000 (0:00:02.538) 0:00:53.062 ******** 2026-03-28 01:06:31.460549 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': 
'30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-28 01:06:31.460560 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-28 01:06:31.460572 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-28 01:06:31.460578 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-28 01:06:31.460587 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-28 01:06:31.460592 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-28 01:06:31.460601 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-28 01:06:31.460609 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-28 01:06:31.460617 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-28 01:06:31.460623 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-28 01:06:31.460633 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-28 01:06:31.460642 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-28 01:06:31.460648 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 
'volumes', 'enabled': True}]) 2026-03-28 01:06:31.460658 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-28 01:06:31.460663 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-28 01:06:31.460676 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-28 01:06:31.460686 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-28 01:06:31.460691 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-28 01:06:31.460696 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-28 01:06:31.460705 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-28 01:06:31.460765 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-28 01:06:31.461234 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-28 01:06:31.461263 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-28 01:06:31.461280 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-28 01:06:31.461301 | orchestrator | 2026-03-28 01:06:31.461306 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-28 01:06:31.461311 | orchestrator | Saturday 28 March 2026 01:03:44 +0000 (0:00:07.124) 0:01:00.187 ******** 2026-03-28 01:06:31.461316 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-28 01:06:31.461323 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-28 01:06:31.461328 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-28 01:06:31.461332 | orchestrator | 2026-03-28 01:06:31.461337 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-28 01:06:31.461341 | orchestrator | Saturday 28 March 2026 01:03:46 +0000 (0:00:02.178) 0:01:02.366 ******** 2026-03-28 01:06:31.461346 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-28 01:06:31.461350 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-28 01:06:31.461355 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-03-28 01:06:31.461360 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-03-28 01:06:31.461364 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-03-28 01:06:31.461369 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-03-28 01:06:31.461373 | orchestrator | 2026-03-28 01:06:31.461392 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and 
permission] ***** 2026-03-28 01:06:31.461397 | orchestrator | Saturday 28 March 2026 01:03:50 +0000 (0:00:03.485) 0:01:05.851 ******** 2026-03-28 01:06:31.461402 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-28 01:06:31.461407 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-28 01:06:31.461412 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-28 01:06:31.461423 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-28 01:06:31.461428 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-28 01:06:31.461433 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-28 01:06:31.461437 | orchestrator | 2026-03-28 01:06:31.461443 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-28 01:06:31.461450 | orchestrator | Saturday 28 March 2026 01:03:51 +0000 (0:00:01.525) 0:01:07.377 ******** 2026-03-28 01:06:31.461460 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.461471 | orchestrator | 2026-03-28 01:06:31.461478 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-28 01:06:31.461485 | orchestrator | Saturday 28 March 2026 01:03:52 +0000 (0:00:00.370) 0:01:07.747 ******** 2026-03-28 01:06:31.461492 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.461499 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:31.461506 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:31.461513 | orchestrator | 2026-03-28 01:06:31.461520 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-28 01:06:31.461535 | orchestrator | Saturday 28 March 2026 01:03:52 +0000 (0:00:00.344) 0:01:08.092 ******** 2026-03-28 01:06:31.461543 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:06:31.461551 | 
orchestrator | 2026-03-28 01:06:31.461559 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-28 01:06:31.461567 | orchestrator | Saturday 28 March 2026 01:03:53 +0000 (0:00:00.677) 0:01:08.770 ******** 2026-03-28 01:06:31.461583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.461592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.461607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.461616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.461633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.461641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.461647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.461653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.461657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.461666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.461675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.461683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.461688 | orchestrator | 2026-03-28 01:06:31.461693 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-28 
01:06:31.461698 | orchestrator | Saturday 28 March 2026 01:03:58 +0000 (0:00:05.108) 0:01:13.879 ******** 2026-03-28 01:06:31.461703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.461708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461731 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.461739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 
'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.461744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461768 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:31.461774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.461808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461827 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:31.461832 | orchestrator | 2026-03-28 01:06:31.461838 | orchestrator 
| TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-28 01:06:31.461844 | orchestrator | Saturday 28 March 2026 01:04:00 +0000 (0:00:01.985) 0:01:15.865 ******** 2026-03-28 01:06:31.461854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.461882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461909 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.461915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.461921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461940 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461945 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:31.461954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.461960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.461991 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
01:06:31.461998 | orchestrator | 2026-03-28 01:06:31.462006 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-28 01:06:31.462013 | orchestrator | Saturday 28 March 2026 01:04:01 +0000 (0:00:01.389) 0:01:17.255 ******** 2026-03-28 01:06:31.462087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.462103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.462112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.462132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462198 | orchestrator | 2026-03-28 01:06:31.462203 | orchestrator | TASK [cinder : Copying over 
cinder-wsgi.conf] ********************************** 2026-03-28 01:06:31.462208 | orchestrator | Saturday 28 March 2026 01:04:06 +0000 (0:00:05.051) 0:01:22.306 ******** 2026-03-28 01:06:31.462215 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-03-28 01:06:31.462221 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:31.462226 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-03-28 01:06:31.462230 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.462235 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-03-28 01:06:31.462239 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:31.462244 | orchestrator | 2026-03-28 01:06:31.462248 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-03-28 01:06:31.462253 | orchestrator | Saturday 28 March 2026 01:04:08 +0000 (0:00:01.432) 0:01:23.739 ******** 2026-03-28 01:06:31.462257 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:06:31.462262 | orchestrator | 2026-03-28 01:06:31.462267 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-03-28 01:06:31.462276 | orchestrator | Saturday 28 March 2026 01:04:09 +0000 (0:00:01.671) 0:01:25.410 ******** 2026-03-28 01:06:31.462280 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:31.462285 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:31.462289 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:31.462294 | orchestrator | 2026-03-28 01:06:31.462298 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-28 01:06:31.462303 | orchestrator | Saturday 28 March 2026 01:04:12 +0000 (0:00:02.919) 0:01:28.330 ******** 2026-03-28 
01:06:31.462308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.462317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.462326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.462331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462389 | orchestrator | 2026-03-28 01:06:31.462394 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-28 01:06:31.462399 | orchestrator | Saturday 28 March 2026 01:04:24 +0000 (0:00:12.059) 0:01:40.389 ******** 2026-03-28 01:06:31.462403 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:31.462408 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:31.462412 | orchestrator | changed: 
[testbed-node-2] 2026-03-28 01:06:31.462417 | orchestrator | 2026-03-28 01:06:31.462421 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-03-28 01:06:31.462426 | orchestrator | Saturday 28 March 2026 01:04:26 +0000 (0:00:01.662) 0:01:42.052 ******** 2026-03-28 01:06:31.462431 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:31.462439 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:31.462443 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:31.462448 | orchestrator | 2026-03-28 01:06:31.462452 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-28 01:06:31.462457 | orchestrator | Saturday 28 March 2026 01:04:27 +0000 (0:00:01.527) 0:01:43.579 ******** 2026-03-28 01:06:31.462462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.462472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 
01:06:31.462491 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.462501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.462506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462528 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:31.462532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.462538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462560 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:31.462564 | orchestrator | 2026-03-28 01:06:31.462569 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-28 01:06:31.462574 | orchestrator | Saturday 28 March 2026 01:04:29 +0000 (0:00:01.188) 0:01:44.768 ******** 2026-03-28 01:06:31.462578 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.462583 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:31.462587 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:31.462592 | orchestrator | 2026-03-28 01:06:31.462600 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-03-28 01:06:31.462605 | orchestrator | Saturday 28 March 2026 01:04:29 +0000 (0:00:00.403) 0:01:45.171 ******** 2026-03-28 01:06:31.462611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 
'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.462620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.462633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:06:31.462647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}}) 2026-03-28 01:06:31.462737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-28 01:06:31.462846 | orchestrator | 2026-03-28 01:06:31.462853 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-03-28 01:06:31.462857 | orchestrator | Saturday 28 March 2026 01:04:33 +0000 (0:00:03.544) 0:01:48.716 ******** 2026-03-28 01:06:31.462863 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 01:06:31.462868 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:06:31.462873 | orchestrator | } 2026-03-28 01:06:31.462878 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 01:06:31.462882 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:06:31.462887 | orchestrator | } 2026-03-28 01:06:31.462892 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 01:06:31.462896 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:06:31.462901 | orchestrator | } 2026-03-28 01:06:31.462906 | orchestrator | 2026-03-28 01:06:31.462910 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 01:06:31.462915 | 
orchestrator | Saturday 28 March 2026 01:04:33 +0000 (0:00:00.343) 0:01:49.059 ******** 2026-03-28 01:06:31.462925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.462936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462955 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.462960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.462965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.462989 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:31.462997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:06:31.463002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.463007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.463019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-28 01:06:31.463024 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:31.463029 | orchestrator | 2026-03-28 01:06:31.463034 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-03-28 01:06:31.463039 | orchestrator | Saturday 28 March 2026 01:04:34 +0000 (0:00:01.316) 0:01:50.375 ******** 2026-03-28 01:06:31.463044 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.463048 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:06:31.463053 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:06:31.463057 | orchestrator | 2026-03-28 01:06:31.463062 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-28 01:06:31.463066 | orchestrator | Saturday 28 March 2026 01:04:35 +0000 (0:00:00.331) 0:01:50.707 ******** 2026-03-28 01:06:31.463071 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:31.463076 | orchestrator | 2026-03-28 01:06:31.463080 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-28 01:06:31.463085 | orchestrator | Saturday 28 March 2026 01:04:37 +0000 (0:00:02.323) 0:01:53.030 ******** 2026-03-28 01:06:31.463089 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:31.463094 | orchestrator | 2026-03-28 01:06:31.463099 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-28 01:06:31.463103 | orchestrator | Saturday 28 March 2026 01:04:39 +0000 (0:00:02.200) 0:01:55.231 ******** 2026-03-28 01:06:31.463108 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:31.463112 | orchestrator | 2026-03-28 01:06:31.463117 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 01:06:31.463121 | orchestrator | Saturday 28 March 2026 01:05:00 +0000 (0:00:20.879) 0:02:16.110 ******** 2026-03-28 01:06:31.463126 | orchestrator | 2026-03-28 01:06:31.463131 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 01:06:31.463136 | orchestrator | Saturday 28 March 2026 01:05:00 +0000 (0:00:00.264) 
0:02:16.375 ******** 2026-03-28 01:06:31.463140 | orchestrator | 2026-03-28 01:06:31.463145 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-28 01:06:31.463150 | orchestrator | Saturday 28 March 2026 01:05:01 +0000 (0:00:00.250) 0:02:16.625 ******** 2026-03-28 01:06:31.463154 | orchestrator | 2026-03-28 01:06:31.463164 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-28 01:06:31.463169 | orchestrator | Saturday 28 March 2026 01:05:01 +0000 (0:00:00.806) 0:02:17.432 ******** 2026-03-28 01:06:31.463173 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:31.463178 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:31.463183 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:31.463187 | orchestrator | 2026-03-28 01:06:31.463192 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-28 01:06:31.463196 | orchestrator | Saturday 28 March 2026 01:05:34 +0000 (0:00:32.523) 0:02:49.955 ******** 2026-03-28 01:06:31.463201 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:31.463206 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:31.463210 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:31.463215 | orchestrator | 2026-03-28 01:06:31.463220 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-28 01:06:31.463225 | orchestrator | Saturday 28 March 2026 01:05:47 +0000 (0:00:13.401) 0:03:03.356 ******** 2026-03-28 01:06:31.463233 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:31.463238 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:31.463242 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:31.463247 | orchestrator | 2026-03-28 01:06:31.463251 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-28 01:06:31.463256 | 
orchestrator | Saturday 28 March 2026 01:06:17 +0000 (0:00:30.198) 0:03:33.555 ******** 2026-03-28 01:06:31.463260 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:06:31.463265 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:06:31.463269 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:06:31.463274 | orchestrator | 2026-03-28 01:06:31.463279 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-28 01:06:31.463283 | orchestrator | Saturday 28 March 2026 01:06:26 +0000 (0:00:08.844) 0:03:42.399 ******** 2026-03-28 01:06:31.463288 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:06:31.463292 | orchestrator | 2026-03-28 01:06:31.463297 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:06:31.463302 | orchestrator | testbed-node-0 : ok=33  changed=24  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-28 01:06:31.463307 | orchestrator | testbed-node-1 : ok=24  changed=17  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-28 01:06:31.463312 | orchestrator | testbed-node-2 : ok=24  changed=17  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-28 01:06:31.463317 | orchestrator | 2026-03-28 01:06:31.463321 | orchestrator | 2026-03-28 01:06:31.463326 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:06:31.463330 | orchestrator | Saturday 28 March 2026 01:06:27 +0000 (0:00:01.113) 0:03:43.513 ******** 2026-03-28 01:06:31.463335 | orchestrator | =============================================================================== 2026-03-28 01:06:31.463339 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 32.52s 2026-03-28 01:06:31.463344 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 30.20s 2026-03-28 01:06:31.463348 | orchestrator | cinder : Running Cinder 
bootstrap container ---------------------------- 20.88s 2026-03-28 01:06:31.463353 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 15.02s 2026-03-28 01:06:31.463357 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 13.40s 2026-03-28 01:06:31.463365 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.06s 2026-03-28 01:06:31.463370 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.84s 2026-03-28 01:06:31.463375 | orchestrator | service-ks-register : cinder | Granting/revoking user roles ------------- 8.37s 2026-03-28 01:06:31.463380 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 7.78s 2026-03-28 01:06:31.463385 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 7.12s 2026-03-28 01:06:31.463389 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.11s 2026-03-28 01:06:31.463394 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.05s 2026-03-28 01:06:31.463398 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.49s 2026-03-28 01:06:31.463403 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.90s 2026-03-28 01:06:31.463408 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.77s 2026-03-28 01:06:31.463412 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.69s 2026-03-28 01:06:31.463417 | orchestrator | service-check-containers : cinder | Check containers -------------------- 3.54s 2026-03-28 01:06:31.463421 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.49s 2026-03-28 01:06:31.463426 | orchestrator | service-uwsgi-config : Copying over 
cinder-api uWSGI config ------------- 2.92s 2026-03-28 01:06:31.463435 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.54s 2026-03-28 01:06:31.463439 | orchestrator | 2026-03-28 01:06:31 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED 2026-03-28 01:06:31.463534 | orchestrator | 2026-03-28 01:06:31 | INFO  | Task 42e1ff8c-99ad-496f-92c4-2816b63e9602 is in state STARTED 2026-03-28 01:06:31.464133 | orchestrator | 2026-03-28 01:06:31 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:06:31.464166 | orchestrator | 2026-03-28 01:06:31 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:34.503123 | orchestrator | 2026-03-28 01:06:34 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:06:34.508307 | orchestrator | 2026-03-28 01:06:34 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED 2026-03-28 01:06:34.511381 | orchestrator | 2026-03-28 01:06:34 | INFO  | Task 42e1ff8c-99ad-496f-92c4-2816b63e9602 is in state STARTED 2026-03-28 01:06:34.514167 | orchestrator | 2026-03-28 01:06:34 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:06:34.514250 | orchestrator | 2026-03-28 01:06:34 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:06:37.549847 | orchestrator | 2026-03-28 01:06:37 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:06:37.551492 | orchestrator | 2026-03-28 01:06:37 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED 2026-03-28 01:06:37.554669 | orchestrator | 2026-03-28 01:06:37 | INFO  | Task 42e1ff8c-99ad-496f-92c4-2816b63e9602 is in state STARTED 2026-03-28 01:06:37.557515 | orchestrator | 2026-03-28 01:06:37 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:06:37.558367 | orchestrator | 2026-03-28 01:06:37 | INFO  | Wait 1 second(s) until the next check 
2026-03-28 01:08:12.142267 | orchestrator | 2026-03-28 01:08:12 | INFO  | Task 
798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:08:12.142324 | orchestrator | 2026-03-28 01:08:12 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:08:12.143843 | orchestrator | 2026-03-28 01:08:12 | INFO  | Task 42e1ff8c-99ad-496f-92c4-2816b63e9602 is in state STARTED
2026-03-28 01:08:12.144501 | orchestrator | 2026-03-28 01:08:12 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED
2026-03-28 01:08:12.144524 | orchestrator | 2026-03-28 01:08:12 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:08:15.191627 | orchestrator | 2026-03-28 01:08:15 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:08:15.191786 | orchestrator | 2026-03-28 01:08:15 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:08:15.193290 | orchestrator |
2026-03-28 01:08:15.193487 | orchestrator | 2026-03-28 01:08:15 | INFO  | Task 42e1ff8c-99ad-496f-92c4-2816b63e9602 is in state SUCCESS
2026-03-28 01:08:15.194540 | orchestrator |
2026-03-28 01:08:15.194853 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:08:15.194868 | orchestrator |
2026-03-28 01:08:15.194877 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:08:15.194885 | orchestrator | Saturday 28 March 2026 01:06:00 +0000 (0:00:00.644) 0:00:00.645 ********
2026-03-28 01:08:15.194893 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:08:15.194901 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:08:15.194909 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:08:15.194918 | orchestrator |
2026-03-28 01:08:15.194932 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:08:15.194945 | orchestrator | Saturday 28 March 2026 01:06:01 +0000 (0:00:00.447) 0:00:01.092 ********
2026-03-28 01:08:15.194959 |
orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-28 01:08:15.194972 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-28 01:08:15.194985 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-28 01:08:15.194999 | orchestrator |
2026-03-28 01:08:15.195115 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-28 01:08:15.195128 | orchestrator |
2026-03-28 01:08:15.195138 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-28 01:08:15.195148 | orchestrator | Saturday 28 March 2026 01:06:01 +0000 (0:00:00.359) 0:00:01.452 ********
2026-03-28 01:08:15.195159 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:08:15.195174 | orchestrator |
2026-03-28 01:08:15.195201 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-03-28 01:08:15.195216 | orchestrator | Saturday 28 March 2026 01:06:02 +0000 (0:00:01.079) 0:00:02.532 ********
2026-03-28 01:08:15.195231 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-28 01:08:15.195244 | orchestrator |
2026-03-28 01:08:15.195258 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************
2026-03-28 01:08:15.195271 | orchestrator | Saturday 28 March 2026 01:06:07 +0000 (0:00:05.199) 0:00:07.731 ********
2026-03-28 01:08:15.195283 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-28 01:08:15.195291 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-28 01:08:15.195299 | orchestrator |
2026-03-28 01:08:15.195307 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-28
01:08:15.195315 | orchestrator | Saturday 28 March 2026 01:06:14 +0000 (0:00:06.378) 0:00:14.110 ********
2026-03-28 01:08:15.195323 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-28 01:08:15.195341 | orchestrator |
2026-03-28 01:08:15.195349 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-28 01:08:15.195357 | orchestrator | Saturday 28 March 2026 01:06:17 +0000 (0:00:03.568) 0:00:17.678 ********
2026-03-28 01:08:15.195364 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-28 01:08:15.195372 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 01:08:15.195380 | orchestrator |
2026-03-28 01:08:15.195388 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-28 01:08:15.195395 | orchestrator | Saturday 28 March 2026 01:06:22 +0000 (0:00:04.718) 0:00:22.397 ********
2026-03-28 01:08:15.195403 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-28 01:08:15.195411 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-03-28 01:08:15.195433 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-03-28 01:08:15.195441 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-03-28 01:08:15.195500 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-03-28 01:08:15.195509 | orchestrator |
2026-03-28 01:08:15.195517 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] ***********
2026-03-28 01:08:15.195525 | orchestrator | Saturday 28 March 2026 01:06:41 +0000 (0:00:19.156) 0:00:41.554 ********
2026-03-28 01:08:15.195566 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-03-28 01:08:15.195574 | orchestrator |
2026-03-28 01:08:15.195582 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-28
01:08:15.195590 | orchestrator | Saturday 28 March 2026 01:06:46 +0000 (0:00:04.737) 0:00:46.291 ******** 2026-03-28 01:08:15.195604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:08:15.195631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:08:15.195647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:15.195657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
2026-03-28 01:08:15.195804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:15.195818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:15.195974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.195992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.196002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.196016 | orchestrator |
2026-03-28 01:08:15.196035 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-28 01:08:15.196050 | orchestrator | Saturday 28 March 2026 01:06:48 +0000 (0:00:02.121) 0:00:48.412 ********
2026-03-28 01:08:15.196109 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-28 01:08:15.196126 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-28 01:08:15.196179 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-28 01:08:15.196195 | orchestrator |
2026-03-28 01:08:15.196208 | orchestrator |
TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-28 01:08:15.196221 | orchestrator | Saturday 28 March 2026 01:06:49 +0000 (0:00:01.174) 0:00:49.587 ********
2026-03-28 01:08:15.196234 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:08:15.196248 | orchestrator |
2026-03-28 01:08:15.196262 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-28 01:08:15.196275 | orchestrator | Saturday 28 March 2026 01:06:49 +0000 (0:00:00.143) 0:00:49.730 ********
2026-03-28 01:08:15.196284 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:08:15.196292 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:08:15.196299 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:08:15.196307 | orchestrator |
2026-03-28 01:08:15.196315 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-28 01:08:15.196323 | orchestrator | Saturday 28 March 2026 01:06:50 +0000 (0:00:00.328) 0:00:50.059 ********
2026-03-28 01:08:15.196331 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:08:15.196338 | orchestrator |
2026-03-28 01:08:15.196346 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-03-28 01:08:15.196354 | orchestrator | Saturday 28 March 2026 01:06:50 +0000 (0:00:00.796) 0:00:50.855 ********
2026-03-28 01:08:15.196363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:08:15.196381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:08:15.196396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:08:15.196413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:15.196422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:15.196430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:15.196443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:15.196455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:15.196469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:08:15.196477 | orchestrator | 2026-03-28 01:08:15.196485 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-28 01:08:15.196493 | orchestrator | Saturday 28 March 2026 01:06:54 +0000 (0:00:03.612) 0:00:54.468 ******** 2026-03-28 01:08:15.196501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:08:15.196510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:08:15.196523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:08:15.196531 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:08:15.196544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:08:15.196557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:08:15.196566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:08:15.196574 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:08:15.196582 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:08:15.196595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:08:15.196603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.196636 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:08:15.196651 | orchestrator |
2026-03-28 01:08:15.196664 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-03-28 01:08:15.196695 | orchestrator | Saturday 28 March 2026 01:06:55 +0000 (0:00:00.749) 0:00:55.217 ********
2026-03-28 01:08:15.196723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.196740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.196754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.196768 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:08:15.196790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.196806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.196836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.196851 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:08:15.196866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.196880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.196894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.196909 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:08:15.196923 | orchestrator |
2026-03-28 01:08:15.196938 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-03-28 01:08:15.196951 | orchestrator | Saturday 28 March 2026 01:06:56 +0000 (0:00:00.968) 0:00:56.185 ********
2026-03-28 01:08:15.197285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.197324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.197341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.197356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.197377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.197990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198089 | orchestrator |
2026-03-28 01:08:15.198097 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-28 01:08:15.198110 | orchestrator | Saturday 28 March 2026 01:06:59 +0000 (0:00:03.561) 0:00:59.747 ********
2026-03-28 01:08:15.198122 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:08:15.198135 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:08:15.198159 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:08:15.198204 | orchestrator |
2026-03-28 01:08:15.198217 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-28 01:08:15.198230 | orchestrator | Saturday 28 March 2026 01:07:01 +0000 (0:00:01.547) 0:01:01.295 ********
2026-03-28 01:08:15.198243 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 01:08:15.198256 | orchestrator |
2026-03-28 01:08:15.198269 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-28 01:08:15.198283 | orchestrator | Saturday 28 March 2026 01:07:02 +0000 (0:00:01.225) 0:01:02.520 ********
2026-03-28 01:08:15.198319 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:08:15.198328 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:08:15.198336 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:08:15.198377 | orchestrator |
2026-03-28 01:08:15.198387 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-28 01:08:15.198395 | orchestrator | Saturday 28 March 2026 01:07:03 +0000 (0:00:00.631) 0:01:03.151 ********
2026-03-28 01:08:15.198435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.198451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.198460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.198469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198737 | orchestrator |
2026-03-28 01:08:15.198745 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-03-28 01:08:15.198754 | orchestrator | Saturday 28 March 2026 01:07:11 +0000 (0:00:08.200) 0:01:11.352 ********
2026-03-28 01:08:15.198762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.198784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198801 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:08:15.198813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.198848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198872 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:08:15.198966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.198985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.198997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.199006 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:08:15.199014 | orchestrator |
2026-03-28 01:08:15.199022 | orchestrator | TASK [service-check-containers : barbican | Check containers] ******************
2026-03-28 01:08:15.199030 | orchestrator | Saturday 28 March 2026 01:07:12 +0000 (0:00:00.656) 0:01:12.008 ********
2026-03-28 01:08:15.199038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.199055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.199069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:08:15.199079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.199090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.199099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.199114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.199122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.199134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:08:15.199143 | orchestrator |
2026-03-28 01:08:15.199151 |
orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-03-28 01:08:15.199159 | orchestrator | Saturday 28 March 2026 01:07:15 +0000 (0:00:03.208) 0:01:15.217 ******** 2026-03-28 01:08:15.199167 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 01:08:15.199175 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:08:15.199183 | orchestrator | } 2026-03-28 01:08:15.199191 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 01:08:15.199199 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:08:15.199207 | orchestrator | } 2026-03-28 01:08:15.199215 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 01:08:15.199223 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:08:15.199231 | orchestrator | } 2026-03-28 01:08:15.199239 | orchestrator | 2026-03-28 01:08:15.199247 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 01:08:15.199254 | orchestrator | Saturday 28 March 2026 01:07:15 +0000 (0:00:00.485) 0:01:15.702 ******** 2026-03-28 01:08:15.199267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:08:15.199282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:08:15.199291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:08:15.199299 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:08:15.199312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:08:15.199322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:08:15.199334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 
01:08:15.199342 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:08:15.199351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:08:15.199365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-28 01:08:15.199382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:08:15.199390 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:08:15.199398 | orchestrator | 2026-03-28 01:08:15.199407 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-28 01:08:15.199415 | orchestrator | Saturday 28 March 2026 01:07:17 +0000 (0:00:01.341) 0:01:17.044 ******** 2026-03-28 01:08:15.199423 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:08:15.199431 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:08:15.199439 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:08:15.199447 | orchestrator | 2026-03-28 01:08:15.199455 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-28 01:08:15.199467 | orchestrator | Saturday 28 March 2026 01:07:17 +0000 (0:00:00.401) 0:01:17.446 ******** 2026-03-28 01:08:15.199475 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:15.199483 | orchestrator | 2026-03-28 01:08:15.199491 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-28 01:08:15.199499 | orchestrator | Saturday 28 March 2026 01:07:20 +0000 (0:00:03.009) 0:01:20.455 ******** 2026-03-28 01:08:15.199590 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:15.199601 | orchestrator | 2026-03-28 01:08:15.199609 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-28 01:08:15.199617 | orchestrator | Saturday 28 March 2026 01:07:22 
+0000 (0:00:02.477) 0:01:22.933 ******** 2026-03-28 01:08:15.199625 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:15.199633 | orchestrator | 2026-03-28 01:08:15.199640 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 01:08:15.199648 | orchestrator | Saturday 28 March 2026 01:07:38 +0000 (0:00:15.110) 0:01:38.043 ******** 2026-03-28 01:08:15.199663 | orchestrator | 2026-03-28 01:08:15.199671 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 01:08:15.199698 | orchestrator | Saturday 28 March 2026 01:07:38 +0000 (0:00:00.155) 0:01:38.199 ******** 2026-03-28 01:08:15.199712 | orchestrator | 2026-03-28 01:08:15.199720 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-28 01:08:15.199728 | orchestrator | Saturday 28 March 2026 01:07:38 +0000 (0:00:00.142) 0:01:38.341 ******** 2026-03-28 01:08:15.199736 | orchestrator | 2026-03-28 01:08:15.199744 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-28 01:08:15.199756 | orchestrator | Saturday 28 March 2026 01:07:38 +0000 (0:00:00.159) 0:01:38.500 ******** 2026-03-28 01:08:15.199764 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:15.199772 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:08:15.199780 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:08:15.199788 | orchestrator | 2026-03-28 01:08:15.199820 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-28 01:08:15.199828 | orchestrator | Saturday 28 March 2026 01:07:48 +0000 (0:00:09.705) 0:01:48.206 ******** 2026-03-28 01:08:15.199836 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:08:15.199844 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:15.199852 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:08:15.199860 | 
orchestrator | 2026-03-28 01:08:15.199868 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-28 01:08:15.199876 | orchestrator | Saturday 28 March 2026 01:08:00 +0000 (0:00:12.603) 0:02:00.810 ******** 2026-03-28 01:08:15.199943 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:08:15.199952 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:08:15.199960 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:08:15.199968 | orchestrator | 2026-03-28 01:08:15.199976 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:08:15.200040 | orchestrator | testbed-node-0 : ok=25  changed=19  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-28 01:08:15.200052 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:08:15.200060 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:08:15.200068 | orchestrator | 2026-03-28 01:08:15.200075 | orchestrator | 2026-03-28 01:08:15.200083 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:08:15.200462 | orchestrator | Saturday 28 March 2026 01:08:12 +0000 (0:00:11.792) 0:02:12.602 ******** 2026-03-28 01:08:15.200478 | orchestrator | =============================================================================== 2026-03-28 01:08:15.200492 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 19.16s 2026-03-28 01:08:15.200504 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 15.11s 2026-03-28 01:08:15.200517 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.60s 2026-03-28 01:08:15.200531 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.79s 2026-03-28 
01:08:15.200545 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.71s 2026-03-28 01:08:15.200559 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.20s 2026-03-28 01:08:15.200572 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 6.38s 2026-03-28 01:08:15.200584 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 5.20s 2026-03-28 01:08:15.200592 | orchestrator | service-ks-register : barbican | Granting/revoking user roles ----------- 4.74s 2026-03-28 01:08:15.200599 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.72s 2026-03-28 01:08:15.200607 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.61s 2026-03-28 01:08:15.200615 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.57s 2026-03-28 01:08:15.200623 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.56s 2026-03-28 01:08:15.200640 | orchestrator | service-check-containers : barbican | Check containers ------------------ 3.21s 2026-03-28 01:08:15.200648 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.01s 2026-03-28 01:08:15.200656 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.48s 2026-03-28 01:08:15.200663 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.12s 2026-03-28 01:08:15.200671 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.55s 2026-03-28 01:08:15.200728 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.34s 2026-03-28 01:08:15.200747 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.23s 2026-03-28 01:08:15.200794 
| orchestrator | 2026-03-28 01:08:15 | INFO  | Task 3a9b35eb-d52c-4456-bf94-33aae35f18d5 is in state STARTED 2026-03-28 01:08:15.200803 | orchestrator | 2026-03-28 01:08:15 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state STARTED 2026-03-28 01:08:15.200811 | orchestrator | 2026-03-28 01:08:15 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:09:10.105818 | orchestrator | 2026-03-28 01:09:10 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:09:10.107253 | orchestrator | 2026-03-28 01:09:10 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED 2026-03-28 01:09:10.109532 | orchestrator | 2026-03-28 01:09:10 | INFO  | Task 3a9b35eb-d52c-4456-bf94-33aae35f18d5 is in state STARTED 2026-03-28 01:09:10.112197 | orchestrator | 2026-03-28 01:09:10 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state STARTED 2026-03-28 01:09:10.115925 | orchestrator | 2026-03-28 01:09:10 | INFO  | Task 04bfac55-31e9-4a41-b1da-2a60e6c92b53 is in state SUCCESS 2026-03-28 01:09:10.117469 | orchestrator | 2026-03-28 01:09:10.117505 | orchestrator | 2026-03-28 01:09:10.117519 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:09:10.117530 | orchestrator | 2026-03-28 01:09:10.117538 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:09:10.117572 | orchestrator | Saturday 28 March 2026 01:04:01 +0000 (0:00:00.371) 0:00:00.371 ******** 2026-03-28 01:09:10.117581 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:09:10.117588 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:09:10.117594 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:09:10.117600 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:09:10.117606 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:09:10.117612 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:09:10.117618 | orchestrator | 2026-03-28 01:09:10.117650 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:09:10.117657 | orchestrator | Saturday 28 March 2026 01:04:02 +0000 (0:00:00.921) 0:00:01.293 ******** 2026-03-28 01:09:10.117665 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-28 01:09:10.117740 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-28 01:09:10.117754 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-28 01:09:10.117797 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-28 01:09:10.117804 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-28
01:09:10.117810 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-28 01:09:10.117816 | orchestrator | 2026-03-28 01:09:10.117822 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-28 01:09:10.117828 | orchestrator | 2026-03-28 01:09:10.117835 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:09:10.117841 | orchestrator | Saturday 28 March 2026 01:04:03 +0000 (0:00:01.231) 0:00:02.524 ******** 2026-03-28 01:09:10.117848 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:09:10.117872 | orchestrator | 2026-03-28 01:09:10.117879 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-28 01:09:10.117886 | orchestrator | Saturday 28 March 2026 01:04:05 +0000 (0:00:01.597) 0:00:04.121 ******** 2026-03-28 01:09:10.117892 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:09:10.117898 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:09:10.117904 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:09:10.117910 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:09:10.117916 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:09:10.117927 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:09:10.117999 | orchestrator | 2026-03-28 01:09:10.118007 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-28 01:09:10.118014 | orchestrator | Saturday 28 March 2026 01:04:06 +0000 (0:00:01.919) 0:00:06.041 ******** 2026-03-28 01:09:10.118053 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:09:10.118059 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:09:10.118065 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:09:10.118071 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:09:10.118077 | orchestrator | 
ok: [testbed-node-4] 2026-03-28 01:09:10.118083 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:09:10.118089 | orchestrator | 2026-03-28 01:09:10.118096 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-28 01:09:10.118102 | orchestrator | Saturday 28 March 2026 01:04:08 +0000 (0:00:01.820) 0:00:07.862 ******** 2026-03-28 01:09:10.118115 | orchestrator | ok: [testbed-node-0] => { 2026-03-28 01:09:10.118122 | orchestrator |  "changed": false, 2026-03-28 01:09:10.118128 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:10.118149 | orchestrator | } 2026-03-28 01:09:10.118157 | orchestrator | ok: [testbed-node-1] => { 2026-03-28 01:09:10.118163 | orchestrator |  "changed": false, 2026-03-28 01:09:10.118169 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:10.118176 | orchestrator | } 2026-03-28 01:09:10.118182 | orchestrator | ok: [testbed-node-2] => { 2026-03-28 01:09:10.118196 | orchestrator |  "changed": false, 2026-03-28 01:09:10.118203 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:10.118209 | orchestrator | } 2026-03-28 01:09:10.118215 | orchestrator | ok: [testbed-node-3] => { 2026-03-28 01:09:10.118241 | orchestrator |  "changed": false, 2026-03-28 01:09:10.118248 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:10.118254 | orchestrator | } 2026-03-28 01:09:10.118260 | orchestrator | ok: [testbed-node-4] => { 2026-03-28 01:09:10.118266 | orchestrator |  "changed": false, 2026-03-28 01:09:10.118272 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:10.118279 | orchestrator | } 2026-03-28 01:09:10.118285 | orchestrator | ok: [testbed-node-5] => { 2026-03-28 01:09:10.118291 | orchestrator |  "changed": false, 2026-03-28 01:09:10.118297 | orchestrator |  "msg": "All assertions passed" 2026-03-28 01:09:10.118303 | orchestrator | } 2026-03-28 01:09:10.118310 | orchestrator | 2026-03-28 01:09:10.118316 | orchestrator | TASK 
[neutron : Check for ML2/OVS presence] ************************************ 2026-03-28 01:09:10.118322 | orchestrator | Saturday 28 March 2026 01:04:09 +0000 (0:00:00.910) 0:00:08.773 ******** 2026-03-28 01:09:10.118328 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.118334 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.118341 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.118347 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.118353 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.118359 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.118365 | orchestrator | 2026-03-28 01:09:10.118372 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-03-28 01:09:10.118378 | orchestrator | Saturday 28 March 2026 01:04:10 +0000 (0:00:01.000) 0:00:09.774 ******** 2026-03-28 01:09:10.118384 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-28 01:09:10.118390 | orchestrator | 2026-03-28 01:09:10.118397 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting endpoints] ************* 2026-03-28 01:09:10.118410 | orchestrator | Saturday 28 March 2026 01:04:14 +0000 (0:00:03.954) 0:00:13.728 ******** 2026-03-28 01:09:10.118416 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-28 01:09:10.118423 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-28 01:09:10.118429 | orchestrator | 2026-03-28 01:09:10.118443 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-28 01:09:10.118451 | orchestrator | Saturday 28 March 2026 01:04:22 +0000 (0:00:07.629) 0:00:21.358 ******** 2026-03-28 01:09:10.118462 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:09:10.118472 | orchestrator | 2026-03-28 
01:09:10.118482 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-28 01:09:10.118492 | orchestrator | Saturday 28 March 2026 01:04:26 +0000 (0:00:03.695) 0:00:25.054 ******** 2026-03-28 01:09:10.118503 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-28 01:09:10.118513 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:09:10.118525 | orchestrator | 2026-03-28 01:09:10.118535 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-28 01:09:10.118546 | orchestrator | Saturday 28 March 2026 01:04:30 +0000 (0:00:04.427) 0:00:29.482 ******** 2026-03-28 01:09:10.118554 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:09:10.118561 | orchestrator | 2026-03-28 01:09:10.118567 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************ 2026-03-28 01:09:10.118573 | orchestrator | Saturday 28 March 2026 01:04:34 +0000 (0:00:04.164) 0:00:33.646 ******** 2026-03-28 01:09:10.118579 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-28 01:09:10.118585 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-28 01:09:10.118591 | orchestrator | 2026-03-28 01:09:10.118597 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:09:10.118604 | orchestrator | Saturday 28 March 2026 01:04:42 +0000 (0:00:08.102) 0:00:41.749 ******** 2026-03-28 01:09:10.118610 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.118616 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.118641 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.118652 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.118662 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.118672 | orchestrator | skipping: 
[testbed-node-5] 2026-03-28 01:09:10.118683 | orchestrator | 2026-03-28 01:09:10.118693 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-28 01:09:10.118703 | orchestrator | Saturday 28 March 2026 01:04:43 +0000 (0:00:00.639) 0:00:42.388 ******** 2026-03-28 01:09:10.118714 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.118721 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.118727 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.118733 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.118740 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.118746 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.118764 | orchestrator | 2026-03-28 01:09:10.118771 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-28 01:09:10.118777 | orchestrator | Saturday 28 March 2026 01:04:45 +0000 (0:00:02.250) 0:00:44.639 ******** 2026-03-28 01:09:10.118786 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:09:10.118797 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:09:10.118807 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:09:10.118817 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:09:10.118827 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:09:10.118838 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:09:10.118848 | orchestrator | 2026-03-28 01:09:10.118859 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-28 01:09:10.118870 | orchestrator | Saturday 28 March 2026 01:04:46 +0000 (0:00:00.928) 0:00:45.567 ******** 2026-03-28 01:09:10.118890 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.118900 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.118910 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.118916 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
01:09:10.118923 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.118929 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.118935 | orchestrator | 2026-03-28 01:09:10.118941 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-28 01:09:10.118952 | orchestrator | Saturday 28 March 2026 01:04:48 +0000 (0:00:02.332) 0:00:47.900 ******** 2026-03-28 01:09:10.118961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.118986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.118995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119031 | orchestrator | 2026-03-28 01:09:10.119038 | orchestrator | TASK [neutron : Check if extra 
ml2 plugins exists] ***************************** 2026-03-28 01:09:10.119044 | orchestrator | Saturday 28 March 2026 01:04:51 +0000 (0:00:02.629) 0:00:50.529 ******** 2026-03-28 01:09:10.119050 | orchestrator | [WARNING]: Skipped 2026-03-28 01:09:10.119057 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-28 01:09:10.119068 | orchestrator | due to this access issue: 2026-03-28 01:09:10.119075 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-28 01:09:10.119081 | orchestrator | a directory 2026-03-28 01:09:10.119087 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:09:10.119093 | orchestrator | 2026-03-28 01:09:10.119099 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:09:10.119105 | orchestrator | Saturday 28 March 2026 01:04:52 +0000 (0:00:00.924) 0:00:51.454 ******** 2026-03-28 01:09:10.119112 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:09:10.119119 | orchestrator | 2026-03-28 01:09:10.119125 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-28 01:09:10.119131 | orchestrator | Saturday 28 March 2026 01:04:53 +0000 (0:00:01.385) 0:00:52.839 ******** 2026-03-28 01:09:10.119138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119196 | orchestrator | 2026-03-28 01:09:10.119202 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-28 01:09:10.119208 | orchestrator | Saturday 28 March 2026 01:04:57 +0000 (0:00:03.958) 0:00:56.797 ******** 2026-03-28 01:09:10.119217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119224 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.119231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119238 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.119249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.119256 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.119263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119278 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.119288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.119295 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.119301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.119308 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.119314 | orchestrator | 2026-03-28 01:09:10.119320 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-28 01:09:10.119327 | orchestrator | Saturday 28 March 2026 01:05:02 +0000 (0:00:04.365) 0:01:01.163 ******** 2026-03-28 01:09:10.119338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119344 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.119351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119361 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.119368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119378 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.119385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.119392 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.119402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.119409 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.119415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.119426 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.119433 | orchestrator | 2026-03-28 01:09:10.119439 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-28 01:09:10.119445 | orchestrator | Saturday 28 March 2026 01:05:07 +0000 (0:00:05.885) 0:01:07.048 ******** 2026-03-28 01:09:10.119455 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.119466 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.119477 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.119488 | orchestrator | skipping: [testbed-node-3] 2026-03-28 
01:09:10.119498 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:09:10.119510 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:09:10.119520 | orchestrator |
2026-03-28 01:09:10.119531 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-03-28 01:09:10.119538 | orchestrator | Saturday 28 March 2026 01:05:11 +0000 (0:00:03.865) 0:01:10.913 ********
2026-03-28 01:09:10.119544 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:09:10.119550 | orchestrator |
2026-03-28 01:09:10.119556 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-03-28 01:09:10.119562 | orchestrator | Saturday 28 March 2026 01:05:12 +0000 (0:00:00.362) 0:01:11.276 ********
2026-03-28 01:09:10.119568 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:09:10.119574 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:09:10.119580 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:09:10.119586 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:09:10.119593 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:09:10.119598 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:09:10.119605 | orchestrator |
2026-03-28 01:09:10.119611 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-03-28 01:09:10.119617 | orchestrator | Saturday 28 March 2026 01:05:12 +0000 (0:00:00.670) 0:01:11.946 ********
2026-03-28 01:09:10.119641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119649 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.119656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119673 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.119680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119686 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.119693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.119699 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.119709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.119716 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.119722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.119733 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.119739 | orchestrator | 2026-03-28 01:09:10.119745 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-28 01:09:10.119752 | orchestrator | Saturday 28 March 2026 01:05:16 +0000 (0:00:04.037) 0:01:15.984 ******** 2026-03-28 01:09:10.119762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119786 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119808 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119815 | orchestrator | 2026-03-28 01:09:10.119821 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-28 01:09:10.119827 | orchestrator | Saturday 28 March 2026 01:05:21 +0000 (0:00:04.783) 0:01:20.767 ******** 2026-03-28 01:09:10.119834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.119864 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119878 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.119884 | orchestrator | 2026-03-28 01:09:10.119891 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-28 01:09:10.119902 | orchestrator | Saturday 28 March 2026 01:05:29 +0000 (0:00:08.087) 0:01:28.855 ******** 2026-03-28 01:09:10.119917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119934 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.119952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119964 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.119976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.119988 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.119998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.120004 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.120016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-28 01:09:10.120028 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:09:10.120035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-28 01:09:10.120041 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:09:10.120047 | orchestrator |
2026-03-28 01:09:10.120054 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-03-28 01:09:10.120060 | orchestrator | Saturday 28 March 2026 01:05:33 +0000 (0:00:03.950) 0:01:32.806 ********
2026-03-28 01:09:10.120066 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:09:10.120072 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:09:10.120078 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:09:10.120085 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:09:10.120091 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:09:10.120097 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:09:10.120103 | orchestrator |
2026-03-28 01:09:10.120110 |
orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-28 01:09:10.120120 | orchestrator | Saturday 28 March 2026 01:05:38 +0000 (0:00:05.169) 0:01:37.975 ******** 2026-03-28 01:09:10.120127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.120133 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.120140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.120146 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.120153 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.120163 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.120172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.120183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.120190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.120197 | orchestrator | 2026-03-28 01:09:10.120204 | orchestrator | TASK [neutron : 
Copying over openvswitch_agent.ini] ****************************
2026-03-28 01:09:10.120211 | orchestrator | Saturday 28 March 2026 01:05:43 +0000 (0:00:04.502) 0:01:42.478 ********
2026-03-28 01:09:10.120217 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:09:10.120223 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:09:10.120229 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:09:10.120236 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:09:10.120248 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:09:10.120254 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:09:10.120261 | orchestrator |
2026-03-28 01:09:10.120267 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-03-28 01:09:10.120273 | orchestrator | Saturday 28 March 2026 01:05:45 +0000 (0:00:02.279) 0:01:44.757 ********
2026-03-28 01:09:10.120279 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:09:10.120286 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:09:10.120292 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:09:10.120298 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:09:10.120304 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:09:10.120310 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:09:10.120317 | orchestrator |
2026-03-28 01:09:10.120323 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-03-28 01:09:10.120329 | orchestrator | Saturday 28 March 2026 01:05:48 +0000 (0:00:02.404) 0:01:47.162 ********
2026-03-28 01:09:10.120336 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:09:10.120342 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:09:10.120348 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:09:10.120354 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:09:10.120360 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:09:10.120366 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:09:10.120373 | orchestrator |
2026-03-28 01:09:10.120379 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-03-28 01:09:10.120388 | orchestrator | Saturday 28 March 2026 01:05:51 +0000 (0:00:03.525) 0:01:50.687 ********
2026-03-28 01:09:10.120394 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:09:10.120401 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:09:10.120407 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:09:10.120413 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:09:10.120419 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:09:10.120425 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:09:10.120432 | orchestrator |
2026-03-28 01:09:10.120438 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-03-28 01:09:10.120445 | orchestrator | Saturday 28 March 2026 01:05:56 +0000 (0:00:04.829) 0:01:55.517 ********
2026-03-28 01:09:10.120454 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:09:10.120465 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:09:10.120475 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:09:10.120484 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:09:10.120495 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:09:10.120506 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:09:10.120517 | orchestrator |
2026-03-28 01:09:10.120527 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-03-28 01:09:10.120533 | orchestrator | Saturday 28 March 2026 01:05:58 +0000 (0:00:02.439) 0:01:57.956 ********
2026-03-28 01:09:10.120539 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-28 01:09:10.120545 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:09:10.120552 | orchestrator |
skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:10.120568 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.120574 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:10.120586 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.120593 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:10.120599 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.120605 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:10.120612 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.120664 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-28 01:09:10.120679 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.120685 | orchestrator | 2026-03-28 01:09:10.120691 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-28 01:09:10.120698 | orchestrator | Saturday 28 March 2026 01:06:01 +0000 (0:00:02.590) 0:02:00.546 ******** 2026-03-28 01:09:10.120705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.120712 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.120719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.120725 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.120735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.120742 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.120752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.120763 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.120770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.120776 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.120783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.120789 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.120795 | orchestrator | 2026-03-28 01:09:10.120801 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-28 01:09:10.120808 | orchestrator | Saturday 28 March 2026 01:06:03 +0000 (0:00:02.337) 0:02:02.883 ******** 2026-03-28 01:09:10.120817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.120824 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.120831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.120842 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.120853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.120860 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.120866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.120873 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.120879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.120885 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.120895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.120902 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.120912 | orchestrator | 2026-03-28 01:09:10.120919 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-28 01:09:10.120925 | orchestrator | Saturday 28 March 2026 01:06:06 +0000 (0:00:02.268) 0:02:05.152 ******** 2026-03-28 01:09:10.120931 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.120937 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.120944 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.120950 | orchestrator | skipping: [testbed-node-1] 2026-03-28 
01:09:10.120956 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.120962 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.120968 | orchestrator | 2026-03-28 01:09:10.120974 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-28 01:09:10.120981 | orchestrator | Saturday 28 March 2026 01:06:08 +0000 (0:00:02.549) 0:02:07.701 ******** 2026-03-28 01:09:10.120987 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.120993 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.121000 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.121012 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:09:10.121022 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:09:10.121032 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:09:10.121042 | orchestrator | 2026-03-28 01:09:10.121117 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-28 01:09:10.121127 | orchestrator | Saturday 28 March 2026 01:06:14 +0000 (0:00:05.836) 0:02:13.538 ******** 2026-03-28 01:09:10.121133 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.121140 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.121146 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.121152 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.121158 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.121164 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.121170 | orchestrator | 2026-03-28 01:09:10.121177 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-28 01:09:10.121183 | orchestrator | Saturday 28 March 2026 01:06:16 +0000 (0:00:02.350) 0:02:15.888 ******** 2026-03-28 01:09:10.121189 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.121195 | orchestrator | skipping: [testbed-node-0] 2026-03-28 
01:09:10.121201 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.121207 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.121213 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.121219 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.121225 | orchestrator | 2026-03-28 01:09:10.121232 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-28 01:09:10.121238 | orchestrator | Saturday 28 March 2026 01:06:20 +0000 (0:00:03.242) 0:02:19.130 ******** 2026-03-28 01:09:10.121244 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.121250 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.121256 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.121262 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.121268 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.121274 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.121280 | orchestrator | 2026-03-28 01:09:10.121287 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-28 01:09:10.121293 | orchestrator | Saturday 28 March 2026 01:06:23 +0000 (0:00:03.083) 0:02:22.213 ******** 2026-03-28 01:09:10.121299 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.121305 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.121311 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.121317 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.121323 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.121329 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.121335 | orchestrator | 2026-03-28 01:09:10.121341 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-28 01:09:10.121348 | orchestrator | Saturday 28 March 2026 01:06:25 +0000 (0:00:02.637) 0:02:24.851 ******** 2026-03-28 
01:09:10.121360 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.121366 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.121372 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.121378 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.121384 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.121390 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.121396 | orchestrator | 2026-03-28 01:09:10.121403 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-28 01:09:10.121409 | orchestrator | Saturday 28 March 2026 01:06:28 +0000 (0:00:02.797) 0:02:27.648 ******** 2026-03-28 01:09:10.121415 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.121421 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.121427 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.121433 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.121439 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.121445 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.121455 | orchestrator | 2026-03-28 01:09:10.121467 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-28 01:09:10.121478 | orchestrator | Saturday 28 March 2026 01:06:32 +0000 (0:00:03.558) 0:02:31.206 ******** 2026-03-28 01:09:10.121491 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.121503 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.121520 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.121533 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.121544 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.121551 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.121557 | orchestrator | 2026-03-28 01:09:10.121563 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] 
**************************** 2026-03-28 01:09:10.121569 | orchestrator | Saturday 28 March 2026 01:06:34 +0000 (0:00:02.555) 0:02:33.762 ******** 2026-03-28 01:09:10.121575 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:10.121582 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.121588 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:10.121595 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.121601 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:10.121607 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.121613 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:10.121619 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.121643 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:10.121649 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.121655 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-28 01:09:10.121662 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.121668 | orchestrator | 2026-03-28 01:09:10.121674 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-28 01:09:10.121680 | orchestrator | Saturday 28 March 2026 01:06:37 +0000 (0:00:02.991) 0:02:36.753 ******** 2026-03-28 01:09:10.121693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.121708 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.121715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.121721 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.121731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.121738 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.121745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.121751 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.121762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.121774 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.121781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.121787 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.121793 | orchestrator | 2026-03-28 01:09:10.121799 | orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-03-28 01:09:10.121806 | orchestrator | Saturday 28 March 2026 01:06:39 +0000 (0:00:02.206) 0:02:38.960 ******** 2026-03-28 01:09:10.121812 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.121819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.121829 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.121840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:09:10.121870 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.121878 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-28 01:09:10.121884 | orchestrator | 2026-03-28 01:09:10.121893 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] *** 2026-03-28 01:09:10.121899 | orchestrator | Saturday 28 March 2026 01:06:43 +0000 (0:00:03.934) 0:02:42.894 ******** 2026-03-28 01:09:10.121905 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 01:09:10.121912 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:09:10.121918 | orchestrator | } 2026-03-28 01:09:10.121924 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 01:09:10.121930 | 
orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:09:10.121936 | orchestrator | } 2026-03-28 01:09:10.121943 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 01:09:10.121949 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:09:10.121955 | orchestrator | } 2026-03-28 01:09:10.121961 | orchestrator | changed: [testbed-node-3] => { 2026-03-28 01:09:10.121967 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:09:10.121973 | orchestrator | } 2026-03-28 01:09:10.121980 | orchestrator | changed: [testbed-node-4] => { 2026-03-28 01:09:10.121985 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:09:10.121992 | orchestrator | } 2026-03-28 01:09:10.121998 | orchestrator | changed: [testbed-node-5] => { 2026-03-28 01:09:10.122004 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:09:10.122010 | orchestrator | } 2026-03-28 01:09:10.122055 | orchestrator | 2026-03-28 01:09:10.122063 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 01:09:10.122070 | orchestrator | Saturday 28 March 2026 01:06:44 +0000 (0:00:00.648) 0:02:43.543 ******** 2026-03-28 01:09:10.122081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.122088 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.122095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.122101 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.122111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:09:10.122122 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.122138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.122156 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.122168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.122179 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.122195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-28 01:09:10.122207 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.122219 | orchestrator | 2026-03-28 01:09:10.122231 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-28 01:09:10.122240 | orchestrator | Saturday 28 March 2026 01:06:47 +0000 (0:00:02.825) 0:02:46.369 ******** 2026-03-28 01:09:10.122246 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:09:10.122252 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:09:10.122258 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:09:10.122264 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:09:10.122270 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:09:10.122276 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:09:10.122282 | orchestrator | 2026-03-28 01:09:10.122289 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-28 01:09:10.122295 | orchestrator | Saturday 28 March 
2026 01:06:47 +0000 (0:00:00.634) 0:02:47.004 ********
2026-03-28 01:09:10.122301 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:09:10.122307 | orchestrator |
2026-03-28 01:09:10.122313 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-03-28 01:09:10.122319 | orchestrator | Saturday 28 March 2026 01:06:50 +0000 (0:00:02.549) 0:02:49.553 ********
2026-03-28 01:09:10.122325 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:09:10.122331 | orchestrator |
2026-03-28 01:09:10.122337 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-28 01:09:10.122344 | orchestrator | Saturday 28 March 2026 01:06:53 +0000 (0:00:02.675) 0:02:52.229 ********
2026-03-28 01:09:10.122350 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:09:10.122356 | orchestrator |
2026-03-28 01:09:10.122362 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-28 01:09:10.122368 | orchestrator | Saturday 28 March 2026 01:07:39 +0000 (0:00:46.359) 0:03:38.589 ********
2026-03-28 01:09:10.122374 | orchestrator |
2026-03-28 01:09:10.122381 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-28 01:09:10.122387 | orchestrator | Saturday 28 March 2026 01:07:39 +0000 (0:00:00.234) 0:03:38.823 ********
2026-03-28 01:09:10.122397 | orchestrator |
2026-03-28 01:09:10.122403 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-28 01:09:10.122409 | orchestrator | Saturday 28 March 2026 01:07:39 +0000 (0:00:00.152) 0:03:38.976 ********
2026-03-28 01:09:10.122415 | orchestrator |
2026-03-28 01:09:10.122421 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-28 01:09:10.122427 | orchestrator | Saturday 28 March 2026 01:07:40 +0000 (0:00:00.101) 0:03:39.077 ********
2026-03-28 01:09:10.122433 | orchestrator |
2026-03-28 01:09:10.122440 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-28 01:09:10.122446 | orchestrator | Saturday 28 March 2026 01:07:40 +0000 (0:00:00.123) 0:03:39.201 ********
2026-03-28 01:09:10.122456 | orchestrator |
2026-03-28 01:09:10.122471 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-28 01:09:10.122481 | orchestrator | Saturday 28 March 2026 01:07:40 +0000 (0:00:00.174) 0:03:39.375 ********
2026-03-28 01:09:10.122492 | orchestrator |
2026-03-28 01:09:10.122503 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-28 01:09:10.122514 | orchestrator | Saturday 28 March 2026 01:07:40 +0000 (0:00:00.210) 0:03:39.585 ********
2026-03-28 01:09:10.122520 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:09:10.122527 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:09:10.122533 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:09:10.122539 | orchestrator |
2026-03-28 01:09:10.122545 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-28 01:09:10.122551 | orchestrator | Saturday 28 March 2026 01:08:16 +0000 (0:00:35.883) 0:04:15.469 ********
2026-03-28 01:09:10.122557 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:09:10.122564 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:09:10.122570 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:09:10.122576 | orchestrator |
2026-03-28 01:09:10.122582 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:09:10.122589 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 01:09:10.122596 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-28 01:09:10.122602 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-28 01:09:10.122608 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 01:09:10.122619 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 01:09:10.122640 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-28 01:09:10.122647 | orchestrator |
2026-03-28 01:09:10.122653 | orchestrator |
2026-03-28 01:09:10.122659 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:09:10.122666 | orchestrator | Saturday 28 March 2026 01:09:07 +0000 (0:00:51.406) 0:05:06.875 ********
2026-03-28 01:09:10.122672 | orchestrator | ===============================================================================
2026-03-28 01:09:10.122678 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 51.41s
2026-03-28 01:09:10.122684 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.36s
2026-03-28 01:09:10.122690 | orchestrator | neutron : Restart neutron-server container ----------------------------- 35.88s
2026-03-28 01:09:10.122697 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 8.10s
2026-03-28 01:09:10.122703 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.09s
2026-03-28 01:09:10.122714 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints ------------- 7.63s
2026-03-28 01:09:10.122720 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.89s
2026-03-28 01:09:10.122726 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.84s
2026-03-28 01:09:10.122732 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.17s
2026-03-28 01:09:10.122738 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.83s
2026-03-28 01:09:10.122744 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.78s
2026-03-28 01:09:10.122751 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.50s
2026-03-28 01:09:10.122757 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.43s
2026-03-28 01:09:10.122763 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.37s
2026-03-28 01:09:10.122769 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.16s
2026-03-28 01:09:10.122776 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.04s
2026-03-28 01:09:10.122782 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.96s
2026-03-28 01:09:10.122788 | orchestrator | service-ks-register : neutron | Creating/deleting services -------------- 3.95s
2026-03-28 01:09:10.122794 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.95s
2026-03-28 01:09:10.122800 | orchestrator | service-check-containers : neutron | Check containers ------------------- 3.93s
2026-03-28 01:09:10.122807 | orchestrator | 2026-03-28 01:09:10 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:09:13.154808 | orchestrator | 2026-03-28 01:09:13 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:09:13.155299 | orchestrator | 2026-03-28 01:09:13 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:09:13.157502 | orchestrator | 2026-03-28 01:09:13 | INFO  | Task 3a9b35eb-d52c-4456-bf94-33aae35f18d5 is in state
STARTED
2026-03-28 01:09:13.158101 | orchestrator | 2026-03-28 01:09:13 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state STARTED
2026-03-28 01:09:13.158117 | orchestrator | 2026-03-28 01:09:13 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:09:16.205026 | orchestrator | 2026-03-28 01:09:16 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:09:16.207395 | orchestrator | 2026-03-28 01:09:16 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:09:16.209940 | orchestrator | 2026-03-28 01:09:16 | INFO  | Task 3a9b35eb-d52c-4456-bf94-33aae35f18d5 is in state STARTED
2026-03-28 01:09:16.212254 | orchestrator | 2026-03-28 01:09:16 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state STARTED
2026-03-28 01:09:16.212290 | orchestrator | 2026-03-28 01:09:16 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:09:19.258001 | orchestrator | 2026-03-28 01:09:19 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:09:19.261589 | orchestrator | 2026-03-28 01:09:19 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:09:19.263840 | orchestrator | 2026-03-28 01:09:19 | INFO  | Task 3a9b35eb-d52c-4456-bf94-33aae35f18d5 is in state STARTED
2026-03-28 01:09:19.267160 | orchestrator | 2026-03-28 01:09:19 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state STARTED
2026-03-28 01:09:19.267225 | orchestrator | 2026-03-28 01:09:19 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:09:22.315748 | orchestrator | 2026-03-28 01:09:22 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:09:22.316229 | orchestrator | 2026-03-28 01:09:22 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:09:22.317897 | orchestrator | 2026-03-28 01:09:22 | INFO  | Task 3a9b35eb-d52c-4456-bf94-33aae35f18d5 is in state STARTED
2026-03-28 01:09:22.319083 | orchestrator | 2026-03-28 01:09:22 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state STARTED
2026-03-28 01:09:22.319145 | orchestrator | 2026-03-28 01:09:22 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:09:25.387008 | orchestrator | 2026-03-28 01:09:25 | INFO  | Task bea646f3-e472-4eb0-9524-f46f1934e984 is in state STARTED
2026-03-28 01:09:25.387659 | orchestrator | 2026-03-28 01:09:25 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:09:25.389076 | orchestrator | 2026-03-28 01:09:25 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:09:25.389769 | orchestrator | 2026-03-28 01:09:25 | INFO  | Task 3a9b35eb-d52c-4456-bf94-33aae35f18d5 is in state SUCCESS
2026-03-28 01:09:25.391081 | orchestrator | 2026-03-28 01:09:25 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state STARTED
2026-03-28 01:09:25.391160 | orchestrator | 2026-03-28 01:09:25 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:09:28.430716 | orchestrator | 2026-03-28 01:09:28 | INFO  | Task bea646f3-e472-4eb0-9524-f46f1934e984 is in state STARTED
2026-03-28 01:09:28.433327 | orchestrator | 2026-03-28 01:09:28 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:09:28.433360 | orchestrator | 2026-03-28 01:09:28 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:09:28.434669 | orchestrator | 2026-03-28 01:09:28 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state STARTED
2026-03-28 01:09:28.434711 | orchestrator | 2026-03-28 01:09:28 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:09:31.464043 | orchestrator | 2026-03-28 01:09:31 | INFO  | Task bea646f3-e472-4eb0-9524-f46f1934e984 is in state STARTED
2026-03-28 01:09:31.464686 | orchestrator | 2026-03-28 01:09:31 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:09:31.465480 | orchestrator | 2026-03-28 01:09:31 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:09:31.467052 | orchestrator | 2026-03-28 01:09:31 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state STARTED
2026-03-28 01:09:31.467089 | orchestrator | 2026-03-28 01:09:31 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:09:34.510728 | orchestrator | 2026-03-28 01:09:34 | INFO  | Task bea646f3-e472-4eb0-9524-f46f1934e984 is in state STARTED
2026-03-28 01:09:34.513794 | orchestrator | 2026-03-28 01:09:34 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:09:34.516120 | orchestrator | 2026-03-28 01:09:34 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:09:34.519378 | orchestrator | 2026-03-28 01:09:34 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state STARTED
2026-03-28 01:09:34.519460 | orchestrator | 2026-03-28 01:09:34 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:09:37.561647 | orchestrator | 2026-03-28 01:09:37 | INFO  | Task bea646f3-e472-4eb0-9524-f46f1934e984 is in state STARTED
2026-03-28 01:09:37.562684 | orchestrator | 2026-03-28 01:09:37 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:09:37.563548 | orchestrator | 2026-03-28 01:09:37 | INFO  | Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state STARTED
2026-03-28 01:09:37.564950 | orchestrator | 2026-03-28 01:09:37 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state STARTED
2026-03-28 01:09:37.565039 | orchestrator | 2026-03-28 01:09:37 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:09:40.605029 | orchestrator | 2026-03-28 01:09:40 | INFO  | Task bea646f3-e472-4eb0-9524-f46f1934e984 is in state STARTED
2026-03-28 01:11:40.716392 | orchestrator | 2026-03-28 01:11:40 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:11:40.721326 | orchestrator | 2026-03-28 01:11:40 | INFO  |
Task 5bf525cb-090a-4924-8e53-9d3d44173427 is in state SUCCESS
2026-03-28 01:11:40.723619 | orchestrator |
2026-03-28 01:11:40.723668 | orchestrator |
2026-03-28 01:11:40.723677 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-28 01:11:40.723686 | orchestrator |
2026-03-28 01:11:40.723694 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-28 01:11:40.723702 | orchestrator | Saturday 28 March 2026 01:08:15 +0000 (0:00:00.094) 0:00:00.094 ********
2026-03-28 01:11:40.723710 | orchestrator | changed: [localhost]
2026-03-28 01:11:40.723718 | orchestrator |
2026-03-28 01:11:40.723725 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-28 01:11:40.723733 | orchestrator | Saturday 28 March 2026 01:08:16 +0000 (0:00:00.970) 0:00:01.065 ********
2026-03-28 01:11:40.723740 | orchestrator | changed: [localhost]
2026-03-28 01:11:40.723747 | orchestrator |
2026-03-28 01:11:40.723754 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-28 01:11:40.723762 | orchestrator | Saturday 28 March 2026 01:08:53 +0000 (0:00:36.340) 0:00:37.405 ********
2026-03-28 01:11:40.723780 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-03-28 01:11:40.723788 | orchestrator | changed: [localhost]
2026-03-28 01:11:40.723795 | orchestrator |
2026-03-28 01:11:40.723803 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:11:40.723810 | orchestrator |
2026-03-28 01:11:40.723817 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:11:40.723824 | orchestrator | Saturday 28 March 2026 01:09:20 +0000 (0:00:26.988) 0:01:04.394 ********
2026-03-28 01:11:40.723831 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:11:40.723839 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:11:40.723846 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:11:40.723853 | orchestrator |
2026-03-28 01:11:40.723861 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:11:40.723868 | orchestrator | Saturday 28 March 2026 01:09:20 +0000 (0:00:00.662) 0:01:05.057 ********
2026-03-28 01:11:40.723876 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-03-28 01:11:40.723919 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-03-28 01:11:40.723928 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-03-28 01:11:40.723935 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-03-28 01:11:40.723943 | orchestrator |
2026-03-28 01:11:40.723950 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-03-28 01:11:40.723958 | orchestrator | skipping: no hosts matched
2026-03-28 01:11:40.723966 | orchestrator |
2026-03-28 01:11:40.723973 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:11:40.723981 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:11:40.723991 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:11:40.724000 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:11:40.724008 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:11:40.724039 | orchestrator |
2026-03-28 01:11:40.724142 | orchestrator |
2026-03-28 01:11:40.724151 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:11:40.724158 | orchestrator | Saturday 28 March 2026 01:09:21 +0000 (0:00:01.005) 0:01:06.062 ********
2026-03-28 01:11:40.724165 | orchestrator | ===============================================================================
2026-03-28 01:11:40.724173 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 36.34s
2026-03-28 01:11:40.724180 | orchestrator | Download ironic-agent kernel ------------------------------------------- 26.99s
2026-03-28 01:11:40.724187 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s
2026-03-28 01:11:40.724194 | orchestrator | Ensure the destination directory exists --------------------------------- 0.97s
2026-03-28 01:11:40.724271 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s
2026-03-28 01:11:40.724281 | orchestrator |
2026-03-28 01:11:40.724289 | orchestrator |
2026-03-28 01:11:40.724297 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:11:40.724306 | orchestrator |
2026-03-28 01:11:40.724314 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:11:40.724322 | orchestrator | Saturday 28 March 2026 01:06:35 +0000 (0:00:00.676) 0:00:00.676 ********
2026-03-28 01:11:40.724330 | orchestrator | ok: [testbed-node-0] 2026-03-28
01:11:40.724338 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:11:40.724346 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:11:40.724354 | orchestrator | 2026-03-28 01:11:40.724363 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:11:40.724371 | orchestrator | Saturday 28 March 2026 01:06:36 +0000 (0:00:00.752) 0:00:01.428 ******** 2026-03-28 01:11:40.724380 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-28 01:11:40.724388 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-28 01:11:40.724397 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-28 01:11:40.724405 | orchestrator | 2026-03-28 01:11:40.724413 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-28 01:11:40.724421 | orchestrator | 2026-03-28 01:11:40.724428 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 01:11:40.724435 | orchestrator | Saturday 28 March 2026 01:06:36 +0000 (0:00:00.728) 0:00:02.156 ******** 2026-03-28 01:11:40.724443 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:40.724547 | orchestrator | 2026-03-28 01:11:40.724554 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************ 2026-03-28 01:11:40.724576 | orchestrator | Saturday 28 March 2026 01:06:37 +0000 (0:00:00.905) 0:00:03.061 ******** 2026-03-28 01:11:40.724584 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-28 01:11:40.724592 | orchestrator | 2026-03-28 01:11:40.724599 | orchestrator | TASK [service-ks-register : designate | Creating/deleting endpoints] *********** 2026-03-28 01:11:40.724607 | orchestrator | Saturday 28 March 2026 01:06:42 +0000 (0:00:04.488) 0:00:07.550 ******** 2026-03-28 01:11:40.724614 | 
orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-28 01:11:40.724622 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-28 01:11:40.724630 | orchestrator | 2026-03-28 01:11:40.724637 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-28 01:11:40.724645 | orchestrator | Saturday 28 March 2026 01:06:50 +0000 (0:00:07.806) 0:00:15.357 ******** 2026-03-28 01:11:40.724652 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:11:40.724659 | orchestrator | 2026-03-28 01:11:40.724667 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-28 01:11:40.724674 | orchestrator | Saturday 28 March 2026 01:06:54 +0000 (0:00:04.002) 0:00:19.359 ******** 2026-03-28 01:11:40.724691 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-28 01:11:40.724699 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:11:40.724706 | orchestrator | 2026-03-28 01:11:40.724713 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-28 01:11:40.724721 | orchestrator | Saturday 28 March 2026 01:06:58 +0000 (0:00:04.654) 0:00:24.014 ******** 2026-03-28 01:11:40.724728 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:11:40.724736 | orchestrator | 2026-03-28 01:11:40.724743 | orchestrator | TASK [service-ks-register : designate | Granting/revoking user roles] ********** 2026-03-28 01:11:40.724751 | orchestrator | Saturday 28 March 2026 01:07:02 +0000 (0:00:03.988) 0:00:28.003 ******** 2026-03-28 01:11:40.724758 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-28 01:11:40.724765 | orchestrator | 2026-03-28 01:11:40.724772 | orchestrator | TASK [designate : Ensuring config 
directories exist] *************************** 2026-03-28 01:11:40.724780 | orchestrator | Saturday 28 March 2026 01:07:07 +0000 (0:00:04.543) 0:00:32.546 ******** 2026-03-28 01:11:40.724791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.724809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.724824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.724839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724877 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 
'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.724989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.725011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.725018 | orchestrator |
2026-03-28 01:11:40.725049 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-03-28 01:11:40.725057 | orchestrator | Saturday 28 March 2026 01:07:12 +0000 (0:00:04.692) 0:00:37.239 ********
2026-03-28 01:11:40.725065 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:11:40.725072 | orchestrator |
2026-03-28 01:11:40.725080 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-03-28 01:11:40.725087 | orchestrator | Saturday 28 March 2026 01:07:12 +0000 (0:00:00.128) 0:00:37.368 ********
2026-03-28 01:11:40.725095 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:11:40.725232 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:11:40.725251 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:11:40.725263 | orchestrator |
2026-03-28 01:11:40.725275 | orchestrator | TASK [designate : include_tasks]
*********************************************** 2026-03-28 01:11:40.725286 | orchestrator | Saturday 28 March 2026 01:07:12 +0000 (0:00:00.347) 0:00:37.716 ******** 2026-03-28 01:11:40.725297 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:40.725307 | orchestrator | 2026-03-28 01:11:40.725319 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-28 01:11:40.725330 | orchestrator | Saturday 28 March 2026 01:07:13 +0000 (0:00:00.613) 0:00:38.330 ******** 2026-03-28 01:11:40.725348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.725532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.725559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.725567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.725721 | orchestrator | 2026-03-28 01:11:40.725729 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-28 01:11:40.725792 | orchestrator | Saturday 28 March 2026 01:07:21 +0000 (0:00:07.998) 0:00:46.329 ******** 2026-03-28 01:11:40.725812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.726630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:11:40.726657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726689 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.726705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.726729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.726738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:11:40.726746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:11:40.726753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726809 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:40.726817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726837 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.726844 | orchestrator | 2026-03-28 01:11:40.726852 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-28 01:11:40.726859 | orchestrator | Saturday 28 March 2026 01:07:24 +0000 (0:00:02.910) 0:00:49.239 ******** 2026-03-28 01:11:40.726870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.726883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.726891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:11:40.726898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:11:40.726906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.726965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.726977 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.726989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:11:40.726996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727024 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:40.727032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727072 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.727079 | orchestrator | 2026-03-28 01:11:40.727086 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-28 01:11:40.727097 | orchestrator | Saturday 28 March 2026 01:07:26 +0000 (0:00:02.609) 0:00:51.848 ******** 2026-03-28 01:11:40.727104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.727118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.727126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.727161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727403 | orchestrator | 2026-03-28 01:11:40.727416 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-28 01:11:40.727425 | orchestrator | Saturday 28 March 2026 01:07:34 +0000 (0:00:07.394) 0:00:59.243 ******** 2026-03-28 01:11:40.727434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.727482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.727499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:40.727521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 
01:11:40.727616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727643 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727682 | orchestrator | 2026-03-28 01:11:40.727689 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-28 01:11:40.727697 | orchestrator | Saturday 28 March 2026 01:07:57 +0000 (0:00:23.674) 0:01:22.918 ******** 2026-03-28 01:11:40.727704 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-28 01:11:40.727711 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-28 01:11:40.727718 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-28 01:11:40.727726 | orchestrator | 2026-03-28 01:11:40.727733 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-28 01:11:40.727740 | orchestrator | Saturday 28 March 2026 01:08:02 +0000 (0:00:04.351) 0:01:27.269 ******** 2026-03-28 01:11:40.727747 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-28 01:11:40.727754 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-28 01:11:40.727761 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-28 01:11:40.727768 | orchestrator | 2026-03-28 01:11:40.727775 | orchestrator | TASK [designate : Copying over rndc.conf] 
************************************** 2026-03-28 01:11:40.727782 | orchestrator | Saturday 28 March 2026 01:08:05 +0000 (0:00:02.969) 0:01:30.238 ******** 2026-03-28 01:11:40.727794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.727807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.727814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.727829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.727977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.727991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.728004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.728016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.728034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.728046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:40.728065 | orchestrator | 2026-03-28 01:11:40.728073 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-28 01:11:40.728080 | orchestrator | Saturday 28 March 2026 01:08:08 +0000 (0:00:03.056) 0:01:33.295 ******** 2026-03-28 01:11:40.728094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.728102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.728109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.728121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-28 01:11:40.728206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.728216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:11:40.728238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:11:40.728286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728378 | orchestrator |
2026-03-28 01:11:40.728391 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-28 01:11:40.728404 | orchestrator | Saturday 28 March 2026 01:08:10 +0000 (0:00:02.725) 0:01:36.021 ********
2026-03-28 01:11:40.728411 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:11:40.728419 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:11:40.728426 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:11:40.728433 | orchestrator |
2026-03-28 01:11:40.728441 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-28 01:11:40.728448 | orchestrator | Saturday 28 March 2026 01:08:11 +0000 (0:00:00.298) 0:01:36.320 ********
2026-03-28 01:11:40.728484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:11:40.728492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:11:40.728500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728545 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:11:40.728553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:11:40.728561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:11:40.728573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:11:40.728589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:11:40.728631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728678 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:11:40.728686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728706 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:11:40.728713 | orchestrator |
2026-03-28 01:11:40.728720 | orchestrator | TASK [service-check-containers : designate | Check containers] *****************
2026-03-28 01:11:40.728728 | orchestrator | Saturday 28 March 2026 01:08:11 +0000 (0:00:00.654) 0:01:36.974 ********
2026-03-28 01:11:40.728735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:11:40.728743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:11:40.728761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:11:40.728773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:11:40.728780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:11:40.728788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:11:40.728795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.728927 | orchestrator |
2026-03-28 01:11:40.728940 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] ***
2026-03-28 01:11:40.728953 | orchestrator | Saturday 28 March 2026 01:08:17 +0000 (0:00:05.676) 0:01:42.651 ********
2026-03-28 01:11:40.728964 | orchestrator | changed: [testbed-node-0] => {
2026-03-28 01:11:40.728976 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:11:40.728989 | orchestrator | }
2026-03-28 01:11:40.729002 | orchestrator | changed: [testbed-node-1] => {
2026-03-28 01:11:40.729014 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:11:40.729027 | orchestrator | }
2026-03-28 01:11:40.729040 | orchestrator | changed: [testbed-node-2] => {
2026-03-28 01:11:40.729052 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:11:40.729064 | orchestrator | }
2026-03-28 01:11:40.729077 | orchestrator |
2026-03-28 01:11:40.729089 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-28 01:11:40.729102 | orchestrator | Saturday 28 March 2026 01:08:18 +0000 (0:00:00.802) 0:01:43.454 ********
2026-03-28 01:11:40.729122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:11:40.729134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-28 01:11:40.729153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.729166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.729186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-28 01:11:40.729199 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.729218 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.729230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.729243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:11:40.729260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.729279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:40.729291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-28 01:11:40.729311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.729324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.729337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.729357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.729371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.729383 | orchestrator | 
skipping: [testbed-node-2] 2026-03-28 01:11:40.729404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.729429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:11:40.729441 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.729503 | orchestrator | 2026-03-28 01:11:40.729512 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-28 01:11:40.729519 | orchestrator | Saturday 28 March 2026 01:08:20 +0000 (0:00:01.860) 0:01:45.314 ******** 2026-03-28 01:11:40.729527 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.729534 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.729541 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:40.729548 | orchestrator | 2026-03-28 01:11:40.729555 | orchestrator | TASK [designate : 
Creating Designate databases] ******************************** 2026-03-28 01:11:40.729562 | orchestrator | Saturday 28 March 2026 01:08:20 +0000 (0:00:00.539) 0:01:45.854 ******** 2026-03-28 01:11:40.729570 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-28 01:11:40.729577 | orchestrator | 2026-03-28 01:11:40.729584 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-28 01:11:40.729591 | orchestrator | Saturday 28 March 2026 01:08:23 +0000 (0:00:02.758) 0:01:48.613 ******** 2026-03-28 01:11:40.729599 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-28 01:11:40.729606 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-28 01:11:40.729613 | orchestrator | 2026-03-28 01:11:40.729620 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-28 01:11:40.729627 | orchestrator | Saturday 28 March 2026 01:08:27 +0000 (0:00:03.561) 0:01:52.175 ******** 2026-03-28 01:11:40.729634 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.729641 | orchestrator | 2026-03-28 01:11:40.729648 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 01:11:40.729655 | orchestrator | Saturday 28 March 2026 01:08:43 +0000 (0:00:16.687) 0:02:08.862 ******** 2026-03-28 01:11:40.729662 | orchestrator | 2026-03-28 01:11:40.729669 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 01:11:40.729676 | orchestrator | Saturday 28 March 2026 01:08:43 +0000 (0:00:00.068) 0:02:08.931 ******** 2026-03-28 01:11:40.729683 | orchestrator | 2026-03-28 01:11:40.729691 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-28 01:11:40.729698 | orchestrator | Saturday 28 March 2026 01:08:43 +0000 (0:00:00.081) 0:02:09.012 ******** 2026-03-28 01:11:40.729705 | 
orchestrator | 2026-03-28 01:11:40.729712 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-28 01:11:40.729719 | orchestrator | Saturday 28 March 2026 01:08:43 +0000 (0:00:00.076) 0:02:09.089 ******** 2026-03-28 01:11:40.729726 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.729733 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:40.729739 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:40.729746 | orchestrator | 2026-03-28 01:11:40.729757 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-28 01:11:40.729764 | orchestrator | Saturday 28 March 2026 01:08:56 +0000 (0:00:12.320) 0:02:21.409 ******** 2026-03-28 01:11:40.729770 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.729777 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:40.729784 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:40.729790 | orchestrator | 2026-03-28 01:11:40.729797 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-28 01:11:40.729803 | orchestrator | Saturday 28 March 2026 01:09:02 +0000 (0:00:05.967) 0:02:27.377 ******** 2026-03-28 01:11:40.729815 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:40.729822 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:40.729829 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.729835 | orchestrator | 2026-03-28 01:11:40.729842 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-28 01:11:40.729848 | orchestrator | Saturday 28 March 2026 01:09:11 +0000 (0:00:09.318) 0:02:36.696 ******** 2026-03-28 01:11:40.729855 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:40.729862 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.729868 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:40.729875 | orchestrator | 
2026-03-28 01:11:40.729881 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-28 01:11:40.729888 | orchestrator | Saturday 28 March 2026 01:09:22 +0000 (0:00:10.821) 0:02:47.517 ******** 2026-03-28 01:11:40.729894 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.729901 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:40.729908 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:40.729914 | orchestrator | 2026-03-28 01:11:40.729921 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-28 01:11:40.729932 | orchestrator | Saturday 28 March 2026 01:09:35 +0000 (0:00:13.538) 0:03:01.056 ******** 2026-03-28 01:11:40.729939 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.729946 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:40.729953 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:40.729959 | orchestrator | 2026-03-28 01:11:40.729966 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-28 01:11:40.729973 | orchestrator | Saturday 28 March 2026 01:09:47 +0000 (0:00:11.594) 0:03:12.650 ******** 2026-03-28 01:11:40.729979 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.729986 | orchestrator | 2026-03-28 01:11:40.729993 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:11:40.730000 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-28 01:11:40.730008 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:11:40.730055 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:11:40.730062 | orchestrator | 2026-03-28 01:11:40.730069 | orchestrator | 2026-03-28 01:11:40.730075 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:11:40.730082 | orchestrator | Saturday 28 March 2026 01:09:55 +0000 (0:00:08.230) 0:03:20.881 ******** 2026-03-28 01:11:40.730089 | orchestrator | =============================================================================== 2026-03-28 01:11:40.730096 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.67s 2026-03-28 01:11:40.730103 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.69s 2026-03-28 01:11:40.730109 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.54s 2026-03-28 01:11:40.730116 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.32s 2026-03-28 01:11:40.730123 | orchestrator | designate : Restart designate-worker container ------------------------- 11.59s 2026-03-28 01:11:40.730132 | orchestrator | designate : Restart designate-producer container ----------------------- 10.82s 2026-03-28 01:11:40.730143 | orchestrator | designate : Restart designate-central container ------------------------- 9.32s 2026-03-28 01:11:40.730154 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.23s 2026-03-28 01:11:40.730165 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.00s 2026-03-28 01:11:40.730176 | orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 7.81s 2026-03-28 01:11:40.730193 | orchestrator | designate : Copying over config.json files for services ----------------- 7.39s 2026-03-28 01:11:40.730204 | orchestrator | designate : Restart designate-api container ----------------------------- 5.97s 2026-03-28 01:11:40.730215 | orchestrator | service-check-containers : designate | Check containers ----------------- 5.68s 2026-03-28 01:11:40.730227 | orchestrator | 
designate : Ensuring config directories exist --------------------------- 4.69s 2026-03-28 01:11:40.730237 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.65s 2026-03-28 01:11:40.730247 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 4.54s 2026-03-28 01:11:40.730257 | orchestrator | service-ks-register : designate | Creating/deleting services ------------ 4.49s 2026-03-28 01:11:40.730268 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.35s 2026-03-28 01:11:40.730279 | orchestrator | service-ks-register : designate | Creating projects --------------------- 4.00s 2026-03-28 01:11:40.730291 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.99s 2026-03-28 01:11:40.730300 | orchestrator | 2026-03-28 01:11:40.730307 | orchestrator | 2026-03-28 01:11:40.730313 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:11:40.730320 | orchestrator | 2026-03-28 01:11:40.730331 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:11:40.730338 | orchestrator | Saturday 28 March 2026 01:09:11 +0000 (0:00:00.320) 0:00:00.320 ******** 2026-03-28 01:11:40.730344 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:11:40.730352 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:11:40.730358 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:11:40.730365 | orchestrator | 2026-03-28 01:11:40.730372 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:11:40.730378 | orchestrator | Saturday 28 March 2026 01:09:11 +0000 (0:00:00.314) 0:00:00.634 ******** 2026-03-28 01:11:40.730385 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-28 01:11:40.730392 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 
2026-03-28 01:11:40.730398 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-28 01:11:40.730405 | orchestrator | 2026-03-28 01:11:40.730412 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-28 01:11:40.730418 | orchestrator | 2026-03-28 01:11:40.730425 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 01:11:40.730432 | orchestrator | Saturday 28 March 2026 01:09:12 +0000 (0:00:00.568) 0:00:01.202 ******** 2026-03-28 01:11:40.730438 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:40.730446 | orchestrator | 2026-03-28 01:11:40.730476 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 2026-03-28 01:11:40.730484 | orchestrator | Saturday 28 March 2026 01:09:12 +0000 (0:00:00.758) 0:00:01.961 ******** 2026-03-28 01:11:40.730490 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-28 01:11:40.730497 | orchestrator | 2026-03-28 01:11:40.730509 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] *********** 2026-03-28 01:11:40.730516 | orchestrator | Saturday 28 March 2026 01:09:17 +0000 (0:00:04.102) 0:00:06.063 ******** 2026-03-28 01:11:40.730523 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-28 01:11:40.730530 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-28 01:11:40.730536 | orchestrator | 2026-03-28 01:11:40.730543 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-28 01:11:40.730550 | orchestrator | Saturday 28 March 2026 01:09:24 +0000 (0:00:07.701) 0:00:13.764 ******** 2026-03-28 01:11:40.730556 | orchestrator | ok: 
[testbed-node-0] => (item=service) 2026-03-28 01:11:40.730563 | orchestrator | 2026-03-28 01:11:40.730569 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-28 01:11:40.730582 | orchestrator | Saturday 28 March 2026 01:09:29 +0000 (0:00:04.231) 0:00:17.996 ******** 2026-03-28 01:11:40.730589 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-28 01:11:40.730596 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:11:40.730603 | orchestrator | 2026-03-28 01:11:40.730609 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-28 01:11:40.730616 | orchestrator | Saturday 28 March 2026 01:09:33 +0000 (0:00:04.568) 0:00:22.564 ******** 2026-03-28 01:11:40.730622 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:11:40.730629 | orchestrator | 2026-03-28 01:11:40.730636 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-03-28 01:11:40.730642 | orchestrator | Saturday 28 March 2026 01:09:38 +0000 (0:00:04.449) 0:00:27.014 ******** 2026-03-28 01:11:40.730649 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-28 01:11:40.730656 | orchestrator | 2026-03-28 01:11:40.730662 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 01:11:40.730669 | orchestrator | Saturday 28 March 2026 01:09:42 +0000 (0:00:04.636) 0:00:31.651 ******** 2026-03-28 01:11:40.730676 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.730683 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.730689 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:40.730696 | orchestrator | 2026-03-28 01:11:40.730702 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-28 01:11:40.730709 | orchestrator | 
Saturday 28 March 2026 01:09:42 +0000 (0:00:00.283) 0:00:31.935 ******** 2026-03-28 01:11:40.730720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.730733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.730747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.730760 | orchestrator | 2026-03-28 01:11:40.730767 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-28 01:11:40.730773 | orchestrator | Saturday 28 March 2026 01:09:44 +0000 (0:00:01.765) 0:00:33.700 ******** 2026-03-28 01:11:40.730780 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.730787 | orchestrator | 2026-03-28 01:11:40.730793 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-28 01:11:40.730800 | orchestrator | Saturday 28 March 2026 01:09:44 +0000 (0:00:00.235) 0:00:33.935 ******** 2026-03-28 01:11:40.730807 | orchestrator | skipping: 
[testbed-node-0] 2026-03-28 01:11:40.730814 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.730820 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:40.730827 | orchestrator | 2026-03-28 01:11:40.730834 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-28 01:11:40.730841 | orchestrator | Saturday 28 March 2026 01:09:45 +0000 (0:00:00.340) 0:00:34.275 ******** 2026-03-28 01:11:40.730848 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:40.730854 | orchestrator | 2026-03-28 01:11:40.730861 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-28 01:11:40.730868 | orchestrator | Saturday 28 March 2026 01:09:46 +0000 (0:00:01.193) 0:00:35.468 ******** 2026-03-28 01:11:40.730875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.730886 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.730905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.730912 | orchestrator | 2026-03-28 01:11:40.730919 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-28 01:11:40.730926 | orchestrator | Saturday 28 March 2026 01:09:48 +0000 (0:00:01.758) 0:00:37.227 ******** 2026-03-28 01:11:40.730933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.730940 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.730952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.730959 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.730966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.730982 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:40.730989 | orchestrator | 2026-03-28 01:11:40.730996 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 
2026-03-28 01:11:40.731002 | orchestrator | Saturday 28 March 2026 01:09:48 +0000 (0:00:00.503) 0:00:37.730 ******** 2026-03-28 01:11:40.731010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.731017 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.731024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.731032 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.731042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.731054 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:40.731061 | orchestrator | 2026-03-28 01:11:40.731068 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-28 01:11:40.731074 | orchestrator | Saturday 28 March 2026 01:09:49 +0000 (0:00:01.067) 0:00:38.798 ******** 2026-03-28 01:11:40.731087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.731095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 
01:11:40.731103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.731111 | orchestrator | 2026-03-28 01:11:40.731118 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-28 01:11:40.731125 | orchestrator | Saturday 28 March 2026 01:09:52 +0000 (0:00:02.417) 0:00:41.216 ******** 2026-03-28 01:11:40.731145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 
'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.731159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.731172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.731190 | orchestrator | 2026-03-28 01:11:40.731203 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-28 01:11:40.731213 | orchestrator | Saturday 28 March 2026 01:09:55 +0000 (0:00:03.324) 0:00:44.541 ******** 2026-03-28 01:11:40.731224 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-03-28 01:11:40.731234 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.731244 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-03-28 01:11:40.731253 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.731263 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-03-28 01:11:40.731274 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:40.731293 | orchestrator | 2026-03-28 01:11:40.731304 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-03-28 01:11:40.731316 | orchestrator | Saturday 28 March 2026 01:09:56 +0000 (0:00:00.554) 0:00:45.096 ******** 2026-03-28 01:11:40.731327 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:40.731338 | orchestrator | 2026-03-28 01:11:40.731347 | 
orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] ********** 2026-03-28 01:11:40.731353 | orchestrator | Saturday 28 March 2026 01:09:57 +0000 (0:00:01.240) 0:00:46.337 ******** 2026-03-28 01:11:40.731360 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.731367 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:40.731373 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:40.731380 | orchestrator | 2026-03-28 01:11:40.731392 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-28 01:11:40.731399 | orchestrator | Saturday 28 March 2026 01:09:59 +0000 (0:00:02.239) 0:00:48.577 ******** 2026-03-28 01:11:40.731405 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.731412 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:40.731419 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:40.731425 | orchestrator | 2026-03-28 01:11:40.731433 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-28 01:11:40.731445 | orchestrator | Saturday 28 March 2026 01:10:01 +0000 (0:00:01.461) 0:00:50.038 ******** 2026-03-28 01:11:40.731491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.731506 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.731518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.731530 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.731542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.731573 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:40.731580 | orchestrator | 2026-03-28 01:11:40.731587 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-03-28 01:11:40.731594 | orchestrator | Saturday 28 March 2026 01:10:02 +0000 (0:00:01.003) 0:00:51.042 ******** 2026-03-28 01:11:40.731605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/']}}}}) 2026-03-28 01:11:40.731621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.731629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-28 01:11:40.731641 | orchestrator | 2026-03-28 01:11:40.731648 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] *** 2026-03-28 01:11:40.731654 | orchestrator | Saturday 28 March 2026 01:10:03 +0000 (0:00:01.651) 0:00:52.693 ******** 2026-03-28 01:11:40.731661 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 01:11:40.731668 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:11:40.731675 | orchestrator | } 2026-03-28 01:11:40.731681 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 01:11:40.731689 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:11:40.731695 | orchestrator | } 2026-03-28 01:11:40.731702 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 01:11:40.731708 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:11:40.731715 | orchestrator | } 2026-03-28 01:11:40.731722 | orchestrator | 2026-03-28 01:11:40.731729 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 01:11:40.731735 | orchestrator | Saturday 28 March 2026 01:10:04 +0000 (0:00:00.598) 0:00:53.292 ******** 2026-03-28 01:11:40.731746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.731753 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:40.731765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.731773 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:40.731780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-28 01:11:40.731791 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:40.731798 | orchestrator | 2026-03-28 01:11:40.731805 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-28 01:11:40.731812 | orchestrator | Saturday 28 March 2026 01:10:05 +0000 (0:00:01.274) 0:00:54.566 ******** 2026-03-28 01:11:40.731818 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.731825 | orchestrator | 2026-03-28 01:11:40.731831 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-28 01:11:40.731838 | orchestrator | Saturday 28 March 2026 01:10:08 +0000 (0:00:02.750) 0:00:57.317 ******** 2026-03-28 01:11:40.731844 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.731851 | orchestrator | 2026-03-28 01:11:40.731858 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-28 01:11:40.731865 | orchestrator | Saturday 28 March 2026 01:10:10 +0000 (0:00:02.626) 0:00:59.944 ******** 2026-03-28 01:11:40.731871 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.731878 | orchestrator | 2026-03-28 01:11:40.731885 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2026-03-28 01:11:40.731892 | orchestrator | Saturday 28 March 2026 01:10:28 +0000 (0:00:17.425) 0:01:17.370 ******** 2026-03-28 01:11:40.731898 | orchestrator | 2026-03-28 01:11:40.731905 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-28 01:11:40.731912 | orchestrator | Saturday 28 March 2026 01:10:28 +0000 (0:00:00.107) 0:01:17.477 ******** 2026-03-28 01:11:40.731919 | orchestrator | 2026-03-28 01:11:40.731925 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-28 01:11:40.731932 | orchestrator | Saturday 28 March 2026 01:10:28 +0000 (0:00:00.077) 0:01:17.554 ******** 2026-03-28 01:11:40.731938 | orchestrator | 2026-03-28 01:11:40.731945 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-28 01:11:40.731952 | orchestrator | Saturday 28 March 2026 01:10:28 +0000 (0:00:00.088) 0:01:17.643 ******** 2026-03-28 01:11:40.731959 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:40.731965 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:11:40.731972 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:11:40.731979 | orchestrator | 2026-03-28 01:11:40.731985 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:11:40.731996 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-28 01:11:40.732003 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:11:40.732010 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-28 01:11:40.732017 | orchestrator | 2026-03-28 01:11:40.732023 | orchestrator | 2026-03-28 01:11:40.732030 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-28 01:11:40.732037 | orchestrator | Saturday 28 March 2026 01:10:40 +0000 (0:00:11.744) 0:01:29.387 ******** 2026-03-28 01:11:40.732043 | orchestrator | =============================================================================== 2026-03-28 01:11:40.732050 | orchestrator | placement : Running placement bootstrap container ---------------------- 17.43s 2026-03-28 01:11:40.732056 | orchestrator | placement : Restart placement-api container ---------------------------- 11.74s 2026-03-28 01:11:40.732063 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 7.70s 2026-03-28 01:11:40.732075 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 4.64s 2026-03-28 01:11:40.732082 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.57s 2026-03-28 01:11:40.732088 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.45s 2026-03-28 01:11:40.732099 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.23s 2026-03-28 01:11:40.732106 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 4.10s 2026-03-28 01:11:40.732113 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.32s 2026-03-28 01:11:40.732119 | orchestrator | placement : Creating placement databases -------------------------------- 2.75s 2026-03-28 01:11:40.732126 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.63s 2026-03-28 01:11:40.732133 | orchestrator | placement : Copying over config.json files for services ----------------- 2.42s 2026-03-28 01:11:40.732140 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.24s 2026-03-28 01:11:40.732147 | orchestrator | placement : Ensuring config 
directories exist --------------------------- 1.77s 2026-03-28 01:11:40.732154 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.76s 2026-03-28 01:11:40.732160 | orchestrator | service-check-containers : placement | Check containers ----------------- 1.65s 2026-03-28 01:11:40.732167 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.46s 2026-03-28 01:11:40.732174 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.27s 2026-03-28 01:11:40.732180 | orchestrator | Configure uWSGI for Placement ------------------------------------------- 1.24s 2026-03-28 01:11:40.732187 | orchestrator | placement : include_tasks ----------------------------------------------- 1.19s 2026-03-28 01:11:40.732194 | orchestrator | 2026-03-28 01:11:40 | INFO  | Task 2232375e-5828-4d73-a2ac-ed127e98f85a is in state SUCCESS 2026-03-28 01:11:40.732201 | orchestrator | 2026-03-28 01:11:40 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:43.770227 | orchestrator | 2026-03-28 01:11:43 | INFO  | Task bea646f3-e472-4eb0-9524-f46f1934e984 is in state STARTED 2026-03-28 01:11:43.772035 | orchestrator | 2026-03-28 01:11:43 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:11:43.774174 | orchestrator | 2026-03-28 01:11:43 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:11:43.774240 | orchestrator | 2026-03-28 01:11:43 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:46.816895 | orchestrator | 2026-03-28 01:11:46 | INFO  | Task bea646f3-e472-4eb0-9524-f46f1934e984 is in state STARTED 2026-03-28 01:11:46.818160 | orchestrator | 2026-03-28 01:11:46 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:11:46.821986 | orchestrator | 2026-03-28 01:11:46 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:11:46.822252 
| orchestrator | 2026-03-28 01:11:46 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:49.860808 | orchestrator | 2026-03-28 01:11:49 | INFO  | Task bea646f3-e472-4eb0-9524-f46f1934e984 is in state STARTED 2026-03-28 01:11:49.861718 | orchestrator | 2026-03-28 01:11:49 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED 2026-03-28 01:11:49.862762 | orchestrator | 2026-03-28 01:11:49 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:11:49.862816 | orchestrator | 2026-03-28 01:11:49 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:11:52.897364 | orchestrator | 2026-03-28 01:11:52 | INFO  | Task bea646f3-e472-4eb0-9524-f46f1934e984 is in state SUCCESS 2026-03-28 01:11:52.898578 | orchestrator | 2026-03-28 01:11:52.898633 | orchestrator | 2026-03-28 01:11:52.898647 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:11:52.898694 | orchestrator | 2026-03-28 01:11:52.898784 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:11:52.898876 | orchestrator | Saturday 28 March 2026 01:09:29 +0000 (0:00:00.502) 0:00:00.502 ******** 2026-03-28 01:11:52.898890 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:11:52.898937 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:11:52.898948 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:11:52.898974 | orchestrator | 2026-03-28 01:11:52.898986 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:11:52.898997 | orchestrator | Saturday 28 March 2026 01:09:30 +0000 (0:00:00.310) 0:00:00.812 ******** 2026-03-28 01:11:52.899009 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-28 01:11:52.899028 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-28 01:11:52.899044 | orchestrator | ok: [testbed-node-2] => 
(item=enable_magnum_True) 2026-03-28 01:11:52.899064 | orchestrator | 2026-03-28 01:11:52.899079 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-28 01:11:52.899096 | orchestrator | 2026-03-28 01:11:52.899113 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-28 01:11:52.899130 | orchestrator | Saturday 28 March 2026 01:09:30 +0000 (0:00:00.584) 0:00:01.397 ******** 2026-03-28 01:11:52.899156 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:52.899178 | orchestrator | 2026-03-28 01:11:52.899196 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] *************** 2026-03-28 01:11:52.899213 | orchestrator | Saturday 28 March 2026 01:09:31 +0000 (0:00:01.023) 0:00:02.421 ******** 2026-03-28 01:11:52.899231 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-28 01:11:52.899250 | orchestrator | 2026-03-28 01:11:52.899270 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting endpoints] ************** 2026-03-28 01:11:52.899288 | orchestrator | Saturday 28 March 2026 01:09:36 +0000 (0:00:04.668) 0:00:07.089 ******** 2026-03-28 01:11:52.899307 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-28 01:11:52.899326 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-28 01:11:52.899344 | orchestrator | 2026-03-28 01:11:52.899361 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-28 01:11:52.899379 | orchestrator | Saturday 28 March 2026 01:09:44 +0000 (0:00:08.346) 0:00:15.436 ******** 2026-03-28 01:11:52.899398 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:11:52.899415 | orchestrator | 
2026-03-28 01:11:52.899433 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-28 01:11:52.899486 | orchestrator | Saturday 28 March 2026 01:09:48 +0000 (0:00:04.011) 0:00:19.447 ******** 2026-03-28 01:11:52.899504 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-28 01:11:52.899524 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:11:52.899543 | orchestrator | 2026-03-28 01:11:52.899561 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-28 01:11:52.899579 | orchestrator | Saturday 28 March 2026 01:09:53 +0000 (0:00:04.599) 0:00:24.046 ******** 2026-03-28 01:11:52.899598 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:11:52.899615 | orchestrator | 2026-03-28 01:11:52.899633 | orchestrator | TASK [service-ks-register : magnum | Granting/revoking user roles] ************* 2026-03-28 01:11:52.899652 | orchestrator | Saturday 28 March 2026 01:09:57 +0000 (0:00:04.106) 0:00:28.153 ******** 2026-03-28 01:11:52.899671 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-28 01:11:52.899689 | orchestrator | 2026-03-28 01:11:52.899707 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-28 01:11:52.899725 | orchestrator | Saturday 28 March 2026 01:10:02 +0000 (0:00:04.756) 0:00:32.909 ******** 2026-03-28 01:11:52.899775 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:52.899794 | orchestrator | 2026-03-28 01:11:52.899811 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-28 01:11:52.899829 | orchestrator | Saturday 28 March 2026 01:10:06 +0000 (0:00:03.988) 0:00:36.898 ******** 2026-03-28 01:11:52.899846 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:52.899861 | orchestrator | 2026-03-28 01:11:52.899877 | orchestrator 
| TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-28 01:11:52.899893 | orchestrator | Saturday 28 March 2026 01:10:10 +0000 (0:00:04.529) 0:00:41.427 ******** 2026-03-28 01:11:52.899910 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:11:52.899926 | orchestrator | 2026-03-28 01:11:52.899942 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-28 01:11:52.899959 | orchestrator | Saturday 28 March 2026 01:10:15 +0000 (0:00:04.695) 0:00:46.123 ******** 2026-03-28 01:11:52.900061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.900090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.900111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.900124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.900148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.900174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}}) 2026-03-28 01:11:52.900186 | orchestrator | 2026-03-28 01:11:52.900197 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-28 01:11:52.900209 | orchestrator | Saturday 28 March 2026 01:10:17 +0000 (0:00:01.876) 0:00:48.000 ******** 2026-03-28 01:11:52.900220 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.900239 | orchestrator | 2026-03-28 01:11:52.900252 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-28 01:11:52.900263 | orchestrator | Saturday 28 March 2026 01:10:17 +0000 (0:00:00.147) 0:00:48.148 ******** 2026-03-28 01:11:52.900273 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.900284 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:52.900300 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.900325 | orchestrator | 2026-03-28 01:11:52.900348 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-28 01:11:52.900365 | orchestrator | Saturday 28 March 2026 01:10:17 +0000 (0:00:00.329) 0:00:48.477 ******** 2026-03-28 01:11:52.900383 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-28 01:11:52.900399 | orchestrator | 2026-03-28 01:11:52.900418 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-28 01:11:52.900555 | orchestrator | Saturday 28 March 2026 01:10:18 +0000 (0:00:01.103) 0:00:49.580 ******** 2026-03-28 01:11:52.900575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.900601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.900634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.900653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.900669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.900688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.900700 | orchestrator | 2026-03-28 01:11:52.900713 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-28 01:11:52.900732 | orchestrator | Saturday 28 March 2026 01:10:22 +0000 (0:00:03.361) 0:00:52.941 ******** 2026-03-28 01:11:52.900751 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:11:52.900769 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:11:52.900786 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:11:52.900804 | orchestrator | 2026-03-28 01:11:52.900822 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-28 01:11:52.900840 | orchestrator | Saturday 28 March 2026 01:10:22 +0000 (0:00:00.555) 0:00:53.497 ******** 2026-03-28 01:11:52.900861 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:11:52.900879 | orchestrator | 
2026-03-28 01:11:52.900896 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-28 01:11:52.900912 | orchestrator | Saturday 28 March 2026 01:10:23 +0000 (0:00:00.649) 0:00:54.146 ******** 2026-03-28 01:11:52.900940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.900966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.900986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.901016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.901033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.901067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.901085 | orchestrator | 2026-03-28 01:11:52.901101 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-28 01:11:52.901118 | orchestrator | Saturday 28 March 2026 01:10:26 +0000 (0:00:02.762) 0:00:56.908 ******** 
2026-03-28 01:11:52.901135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.901165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.901183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.901201 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.901218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.901234 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:52.901268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.901336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.901353 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.901370 | orchestrator | 2026-03-28 01:11:52.901386 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-28 01:11:52.901403 | orchestrator | Saturday 28 March 2026 01:10:28 +0000 (0:00:02.589) 0:00:59.498 ******** 2026-03-28 
01:11:52.901420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.901464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.901483 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.901523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.901551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.901569 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.901584 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.901602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.901618 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:52.901633 | orchestrator | 2026-03-28 01:11:52.901649 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-28 01:11:52.901664 | orchestrator | Saturday 28 March 2026 01:10:30 +0000 (0:00:01.495) 0:01:00.993 ******** 2026-03-28 01:11:52.902172 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.902215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 
'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.902226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.902237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.902248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.902275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.902296 | orchestrator | 2026-03-28 01:11:52.902312 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-28 01:11:52.902328 | orchestrator | Saturday 28 March 2026 01:10:33 +0000 (0:00:03.033) 0:01:04.027 ******** 2026-03-28 01:11:52.902345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.902364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.902382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.902414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.902462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.902481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.902497 | orchestrator | 2026-03-28 01:11:52.902508 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-28 01:11:52.902524 | orchestrator | Saturday 28 March 2026 01:10:40 +0000 (0:00:07.038) 0:01:11.065 ******** 2026-03-28 01:11:52.902535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.902546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.902556 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.902579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.902596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.902606 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:52.902617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.902627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.902637 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.902647 | orchestrator | 2026-03-28 01:11:52.902657 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-03-28 01:11:52.902666 | orchestrator | Saturday 28 March 2026 01:10:41 +0000 (0:00:01.090) 0:01:12.155 ******** 2026-03-28 01:11:52.902687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.902704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.902715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:11:52.902726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.902736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.902764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:11:52.902776 | orchestrator | 2026-03-28 01:11:52.902787 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] *** 2026-03-28 01:11:52.902799 | orchestrator | Saturday 28 March 2026 01:10:45 +0000 (0:00:03.777) 0:01:15.933 ******** 2026-03-28 01:11:52.902810 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 01:11:52.902821 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:11:52.902833 | orchestrator | } 2026-03-28 01:11:52.902844 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 01:11:52.902854 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:11:52.902865 | orchestrator | } 2026-03-28 01:11:52.902876 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 01:11:52.902887 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:11:52.902902 | orchestrator | } 2026-03-28 01:11:52.902918 | orchestrator | 2026-03-28 01:11:52.902934 | orchestrator | TASK [service-check-containers : Include tasks] 
******************************** 2026-03-28 01:11:52.902949 | orchestrator | Saturday 28 March 2026 01:10:45 +0000 (0:00:00.369) 0:01:16.302 ******** 2026-03-28 01:11:52.902966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.902988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.903001 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 01:11:52.903020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.903051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.903062 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:11:52.903073 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:11:52.903083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:11:52.903093 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:11:52.903103 | orchestrator | 2026-03-28 01:11:52.903112 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2026-03-28 01:11:52.903122 | orchestrator | Saturday 28 March 2026 01:10:47 +0000 (0:00:02.100) 0:01:18.402 ********
2026-03-28 01:11:52.903131 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:11:52.903148 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:11:52.903158 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:11:52.903167 | orchestrator |
2026-03-28 01:11:52.903177 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-03-28 01:11:52.903187 | orchestrator | Saturday 28 March 2026 01:10:48 +0000 (0:00:00.664) 0:01:19.067 ********
2026-03-28 01:11:52.903196 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:11:52.903206 | orchestrator |
2026-03-28 01:11:52.903222 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-03-28 01:11:52.903233 | orchestrator | Saturday 28 March 2026 01:10:51 +0000 (0:00:02.834) 0:01:21.902 ********
2026-03-28 01:11:52.903242 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:11:52.903252 | orchestrator |
2026-03-28 01:11:52.903261 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-03-28 01:11:52.903270 | orchestrator | Saturday 28 March 2026 01:10:54 +0000 (0:00:02.895) 0:01:24.797 ********
2026-03-28 01:11:52.903279 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:11:52.903289 | orchestrator |
2026-03-28 01:11:52.903298 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-28 01:11:52.903307 | orchestrator | Saturday 28 March 2026 01:11:12 +0000 (0:00:18.768) 0:01:43.565 ********
2026-03-28 01:11:52.903317 | orchestrator |
2026-03-28 01:11:52.903326 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-28 01:11:52.903336 | orchestrator | Saturday 28 March 2026 01:11:12 +0000 (0:00:00.074) 0:01:43.640 ********
2026-03-28 01:11:52.903345 | orchestrator |
2026-03-28 01:11:52.903354 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-28 01:11:52.903364 | orchestrator | Saturday 28 March 2026 01:11:13 +0000 (0:00:00.111) 0:01:43.751 ********
2026-03-28 01:11:52.903373 | orchestrator |
2026-03-28 01:11:52.903382 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-03-28 01:11:52.903392 | orchestrator | Saturday 28 March 2026 01:11:13 +0000 (0:00:00.094) 0:01:43.846 ********
2026-03-28 01:11:52.903401 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:11:52.903411 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:11:52.903420 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:11:52.903430 | orchestrator |
2026-03-28 01:11:52.903468 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-03-28 01:11:52.903479 | orchestrator | Saturday 28 March 2026 01:11:33 +0000 (0:00:20.767) 0:02:04.614 ********
2026-03-28 01:11:52.903489 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:11:52.903504 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:11:52.903514 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:11:52.903523 | orchestrator |
2026-03-28 01:11:52.903533 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:11:52.903548 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-28 01:11:52.903559 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 01:11:52.903569 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-28 01:11:52.903578 | orchestrator |
2026-03-28 01:11:52.903587 | orchestrator |
2026-03-28 01:11:52.903597 | orchestrator |
TASKS RECAP ********************************************************************
2026-03-28 01:11:52.903606 | orchestrator | Saturday 28 March 2026 01:11:51 +0000 (0:00:17.525) 0:02:22.139 ********
2026-03-28 01:11:52.903616 | orchestrator | ===============================================================================
2026-03-28 01:11:52.903625 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.77s
2026-03-28 01:11:52.903635 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.77s
2026-03-28 01:11:52.903644 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 17.53s
2026-03-28 01:11:52.903660 | orchestrator | service-ks-register : magnum | Creating/deleting endpoints -------------- 8.35s
2026-03-28 01:11:52.903669 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.04s
2026-03-28 01:11:52.903679 | orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 4.75s
2026-03-28 01:11:52.903688 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.70s
2026-03-28 01:11:52.903697 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 4.67s
2026-03-28 01:11:52.903707 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.60s
2026-03-28 01:11:52.903716 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.53s
2026-03-28 01:11:52.903726 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.11s
2026-03-28 01:11:52.903735 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 4.01s
2026-03-28 01:11:52.903744 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.99s
2026-03-28 01:11:52.903754 | orchestrator | service-check-containers : magnum | Check containers -------------------- 3.78s
2026-03-28 01:11:52.903763 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.36s
2026-03-28 01:11:52.903772 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.03s
2026-03-28 01:11:52.903782 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.89s
2026-03-28 01:11:52.903791 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.83s
2026-03-28 01:11:52.903801 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.76s
2026-03-28 01:11:52.903810 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 2.59s
2026-03-28 01:11:52.903820 | orchestrator | 2026-03-28 01:11:52 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:11:52.903829 | orchestrator | 2026-03-28 01:11:52 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:11:52.903839 | orchestrator | 2026-03-28 01:11:52 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:11:55.941142 | orchestrator | 2026-03-28 01:11:55 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:11:55.941997 | orchestrator | 2026-03-28 01:11:55 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:11:55.942102 | orchestrator | 2026-03-28 01:11:55 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:11:58.980273 | orchestrator | 2026-03-28 01:11:58 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:11:58.981316 | orchestrator | 2026-03-28 01:11:58 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:11:58.981351 | orchestrator | 2026-03-28 01:11:58 | INFO  | Wait 1 second(s) until the next check
2026-03-28
01:12:02.019772 | orchestrator | 2026-03-28 01:12:02 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state STARTED
2026-03-28 01:12:02.021189 | orchestrator | 2026-03-28 01:12:02 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:12:02.021253 | orchestrator | 2026-03-28 01:12:02 | INFO  | Wait 1 second(s)
until the next check 2026-03-28 01:14:40.422506 | orchestrator | 2026-03-28 01:14:40 | INFO  | Task 798ce0e8-8fa5-42ec-a3ea-2183a2a0b41c is in state SUCCESS
2026-03-28 01:14:40.424301 | orchestrator |
2026-03-28 01:14:40.424359 | orchestrator |
2026-03-28 01:14:40.424371 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-28 01:14:40.424383 | orchestrator |
2026-03-28 01:14:40.424390 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-28 01:14:40.424396 | orchestrator | Saturday 28 March 2026 01:03:01 +0000 (0:00:00.409) 0:00:00.409 ********
2026-03-28 01:14:40.424402 | orchestrator | changed: [testbed-manager]
2026-03-28 01:14:40.424409 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.424414 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:14:40.424419 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:14:40.424425 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:14:40.424430 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:14:40.424435 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:14:40.424441 | orchestrator |
2026-03-28 01:14:40.424446 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-28 01:14:40.424451 | orchestrator | Saturday 28 March 2026 01:03:02 +0000 (0:00:00.771) 0:00:01.181 ********
2026-03-28 01:14:40.424457 | orchestrator | changed: [testbed-manager]
2026-03-28 01:14:40.424462 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.424467 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:14:40.424472 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:14:40.424477 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:14:40.424482 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:14:40.424487 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:14:40.424492 | orchestrator |
2026-03-28 01:14:40.424497 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-28 01:14:40.424502 | orchestrator | Saturday 28 March 2026 01:03:03 +0000 (0:00:00.990) 0:00:02.171 ********
2026-03-28 01:14:40.424507 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-28 01:14:40.424513 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-28 01:14:40.424518 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-28 01:14:40.424523 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-28 01:14:40.424528 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-28 01:14:40.424533 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-28 01:14:40.424538 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-28 01:14:40.424543 | orchestrator |
2026-03-28 01:14:40.424548 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-28 01:14:40.424553 | orchestrator |
2026-03-28 01:14:40.424558 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-28 01:14:40.424563 | orchestrator | Saturday 28 March 2026 01:03:04 +0000 (0:00:01.478) 0:00:03.649 ********
2026-03-28 01:14:40.424569 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:14:40.424574 | orchestrator |
2026-03-28 01:14:40.424588 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-28 01:14:40.424597 | orchestrator | Saturday 28 March 2026 01:03:06 +0000 (0:00:01.479) 0:00:05.129 ********
2026-03-28 01:14:40.424616 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-28 01:14:40.424628 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-28 01:14:40.424659 | orchestrator |
2026-03-28 01:14:40.424722 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-28 01:14:40.424731 | orchestrator | Saturday 28 March 2026 01:03:12 +0000 (0:00:05.705) 0:00:10.835 ********
2026-03-28 01:14:40.424739 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-28 01:14:40.424744 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-28 01:14:40.424749 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.424754 | orchestrator |
2026-03-28 01:14:40.424759 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-28 01:14:40.424765 | orchestrator | Saturday 28 March 2026 01:03:16 +0000 (0:00:04.762) 0:00:15.597 ********
2026-03-28 01:14:40.424770 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.424775 | orchestrator |
2026-03-28 01:14:40.424780 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-28 01:14:40.424785 | orchestrator | Saturday 28 March 2026 01:03:17 +0000 (0:00:00.707) 0:00:16.304 ********
2026-03-28 01:14:40.424790 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.424794 | orchestrator |
2026-03-28 01:14:40.424800 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-28 01:14:40.424804 | orchestrator | Saturday 28 March 2026 01:03:19 +0000 (0:00:03.252) 0:00:17.975 ********
2026-03-28 01:14:40.424809 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.424814 | orchestrator |
2026-03-28 01:14:40.424819 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-28 01:14:40.424824 | orchestrator | Saturday 28 March 2026 01:03:22 +0000 (0:00:03.252) 0:00:21.228 ********
2026-03-28 01:14:40.424829 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.424834 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.424839 | orchestrator |
skipping: [testbed-node-2] 2026-03-28 01:14:40.424844 | orchestrator | 2026-03-28 01:14:40.424850 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-28 01:14:40.424856 | orchestrator | Saturday 28 March 2026 01:03:23 +0000 (0:00:01.115) 0:00:22.343 ******** 2026-03-28 01:14:40.424862 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:40.424868 | orchestrator | 2026-03-28 01:14:40.424874 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-28 01:14:40.424890 | orchestrator | Saturday 28 March 2026 01:03:59 +0000 (0:00:35.814) 0:00:58.158 ******** 2026-03-28 01:14:40.424896 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:40.424902 | orchestrator | 2026-03-28 01:14:40.424908 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-28 01:14:40.424913 | orchestrator | Saturday 28 March 2026 01:04:15 +0000 (0:00:16.454) 0:01:14.612 ******** 2026-03-28 01:14:40.424919 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:40.424925 | orchestrator | 2026-03-28 01:14:40.424930 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-28 01:14:40.424936 | orchestrator | Saturday 28 March 2026 01:04:30 +0000 (0:00:14.431) 0:01:29.044 ******** 2026-03-28 01:14:40.425027 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:14:40.425037 | orchestrator | 2026-03-28 01:14:40.425042 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-28 01:14:40.425048 | orchestrator | Saturday 28 March 2026 01:04:31 +0000 (0:00:00.838) 0:01:29.882 ******** 2026-03-28 01:14:40.425054 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.425060 | orchestrator | 2026-03-28 01:14:40.425078 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-28 01:14:40.425083 | 
orchestrator | Saturday 28 March 2026 01:04:31 +0000 (0:00:00.737) 0:01:30.701 ********
2026-03-28 01:14:40.425089 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:14:40.425094 | orchestrator |
2026-03-28 01:14:40.425099 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-28 01:14:40.425104 | orchestrator | Saturday 28 March 2026 01:04:32 +0000 (0:00:00.737) 0:01:31.439 ********
2026-03-28 01:14:40.425117 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:14:40.425122 | orchestrator |
2026-03-28 01:14:40.425127 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-28 01:14:40.425132 | orchestrator | Saturday 28 March 2026 01:04:52 +0000 (0:00:20.330) 0:01:51.769 ********
2026-03-28 01:14:40.425137 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.425142 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425147 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425152 | orchestrator |
2026-03-28 01:14:40.425157 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-28 01:14:40.425162 | orchestrator |
2026-03-28 01:14:40.425167 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-28 01:14:40.425172 | orchestrator | Saturday 28 March 2026 01:04:53 +0000 (0:00:00.385) 0:01:52.155 ********
2026-03-28 01:14:40.425177 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:14:40.425182 | orchestrator |
2026-03-28 01:14:40.425187 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-28 01:14:40.425192 | orchestrator | Saturday 28 March 2026 01:04:54 +0000 (0:00:01.053) 0:01:53.208 ********
2026-03-28 01:14:40.425197 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425202 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425207 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.425212 | orchestrator |
2026-03-28 01:14:40.425247 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-28 01:14:40.425256 | orchestrator | Saturday 28 March 2026 01:04:57 +0000 (0:00:02.853) 0:01:56.062 ********
2026-03-28 01:14:40.425263 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425271 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425278 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.425286 | orchestrator |
2026-03-28 01:14:40.425295 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-28 01:14:40.425310 | orchestrator | Saturday 28 March 2026 01:04:59 +0000 (0:00:02.685) 0:01:58.747 ********
2026-03-28 01:14:40.425318 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.425324 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425329 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425334 | orchestrator |
2026-03-28 01:14:40.425339 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-28 01:14:40.425344 | orchestrator | Saturday 28 March 2026 01:05:01 +0000 (0:00:01.820) 0:02:00.567 ********
2026-03-28 01:14:40.425349 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-28 01:14:40.425354 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425359 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-28 01:14:40.425367 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425379 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-28 01:14:40.425391 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-28 01:14:40.425399 | orchestrator |
2026-03-28 01:14:40.425407 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-28 01:14:40.425416 | orchestrator | Saturday 28 March 2026 01:05:16 +0000 (0:00:15.205) 0:02:15.773 ********
2026-03-28 01:14:40.425423 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.425431 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425439 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425446 | orchestrator |
2026-03-28 01:14:40.425454 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-28 01:14:40.425463 | orchestrator | Saturday 28 March 2026 01:05:17 +0000 (0:00:00.658) 0:02:16.432 ********
2026-03-28 01:14:40.425471 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-28 01:14:40.425479 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.425487 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-28 01:14:40.425504 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425512 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-28 01:14:40.425519 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425527 | orchestrator |
2026-03-28 01:14:40.425535 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-28 01:14:40.425544 | orchestrator | Saturday 28 March 2026 01:05:20 +0000 (0:00:03.037) 0:02:19.470 ********
2026-03-28 01:14:40.425553 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425560 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425569 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.425576 | orchestrator |
2026-03-28 01:14:40.425586 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-28 01:14:40.425592 | orchestrator | Saturday 28 March 2026 01:05:21 +0000 (0:00:00.611) 0:02:20.081 ********
2026-03-28 01:14:40.425597 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425602 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425607 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.425611 | orchestrator |
2026-03-28 01:14:40.425616 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-28 01:14:40.425621 | orchestrator | Saturday 28 March 2026 01:05:22 +0000 (0:00:01.432) 0:02:21.514 ********
2026-03-28 01:14:40.425627 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425632 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425645 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.425650 | orchestrator |
2026-03-28 01:14:40.425655 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-28 01:14:40.425660 | orchestrator | Saturday 28 March 2026 01:05:27 +0000 (0:00:04.453) 0:02:25.967 ********
2026-03-28 01:14:40.425665 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425687 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425693 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:14:40.425738 | orchestrator |
2026-03-28 01:14:40.425744 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-28 01:14:40.425749 | orchestrator | Saturday 28 March 2026 01:05:53 +0000 (0:00:25.924) 0:02:51.892 ********
2026-03-28 01:14:40.425754 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425759 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425764 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:14:40.425769 | orchestrator |
2026-03-28 01:14:40.425774 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-28 01:14:40.425782 | orchestrator | Saturday 28 March 2026 01:06:09 +0000 (0:00:16.414) 0:03:08.307 ********
2026-03-28 01:14:40.425837 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425848 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.425949 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:14:40.425959 | orchestrator |
2026-03-28 01:14:40.425967 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-28 01:14:40.425976 | orchestrator | Saturday 28 March 2026 01:06:11 +0000 (0:00:02.090) 0:03:10.397 ********
2026-03-28 01:14:40.425985 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.425993 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.426002 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.426008 | orchestrator |
2026-03-28 01:14:40.426013 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-28 01:14:40.426063 | orchestrator | Saturday 28 March 2026 01:06:27 +0000 (0:00:15.612) 0:03:26.009 ********
2026-03-28 01:14:40.426069 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.426074 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.426083 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.426095 | orchestrator |
2026-03-28 01:14:40.426106 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-28 01:14:40.426115 | orchestrator | Saturday 28 March 2026 01:06:29 +0000 (0:00:02.372) 0:03:28.382 ********
2026-03-28 01:14:40.426124 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.426142 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.426150 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.426158 | orchestrator |
2026-03-28 01:14:40.426166 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-28 01:14:40.426174 | orchestrator |
2026-03-28 01:14:40.426184 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-28 01:14:40.426193 | orchestrator | Saturday 28 March 2026 01:06:30 +0000 (0:00:00.853) 0:03:29.235 ********
2026-03-28 01:14:40.426202 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:14:40.426233 | orchestrator |
2026-03-28 01:14:40.426243 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] *****************
2026-03-28 01:14:40.426251 | orchestrator | Saturday 28 March 2026 01:06:31 +0000 (0:00:01.402) 0:03:30.637 ********
2026-03-28 01:14:40.426256 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-28 01:14:40.426262 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-28 01:14:40.426267 | orchestrator |
2026-03-28 01:14:40.426272 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] ****************
2026-03-28 01:14:40.426277 | orchestrator | Saturday 28 March 2026 01:06:35 +0000 (0:00:04.056) 0:03:34.694 ********
2026-03-28 01:14:40.426282 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-28 01:14:40.426289 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-28 01:14:40.426295 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-28 01:14:40.426304 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-28 01:14:40.426311 | orchestrator |
2026-03-28 01:14:40.426316 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-28 01:14:40.426321 | orchestrator | Saturday 28 March 2026 01:06:43 +0000 (0:00:07.872) 0:03:42.567 ********
2026-03-28
01:14:40.426326 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-28 01:14:40.426331 | orchestrator |
2026-03-28 01:14:40.426336 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-28 01:14:40.426341 | orchestrator | Saturday 28 March 2026 01:06:47 +0000 (0:00:03.676) 0:03:46.243 ********
2026-03-28 01:14:40.426346 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-28 01:14:40.426352 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-28 01:14:40.426357 | orchestrator |
2026-03-28 01:14:40.426362 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-28 01:14:40.426373 | orchestrator | Saturday 28 March 2026 01:06:52 +0000 (0:00:04.616) 0:03:50.860 ********
2026-03-28 01:14:40.426378 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-28 01:14:40.426383 | orchestrator |
2026-03-28 01:14:40.426388 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] ***************
2026-03-28 01:14:40.426393 | orchestrator | Saturday 28 March 2026 01:06:55 +0000 (0:00:03.728) 0:03:54.589 ********
2026-03-28 01:14:40.426398 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-28 01:14:40.426429 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-28 01:14:40.426435 | orchestrator |
2026-03-28 01:14:40.426440 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-28 01:14:40.426455 | orchestrator | Saturday 28 March 2026 01:07:04 +0000 (0:00:08.937) 0:04:03.527 ********
2026-03-28 01:14:40.426464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.426560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.426580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.426586 | orchestrator |
2026-03-28 01:14:40.426595 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-28 01:14:40.426601 | orchestrator | Saturday 28 March 2026 01:07:07 +0000 (0:00:02.619) 0:04:06.147 ********
2026-03-28 01:14:40.426615 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.426620 | orchestrator |
2026-03-28 01:14:40.426642 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-28 01:14:40.426647 | orchestrator | Saturday 28 March 2026 01:07:07 +0000 (0:00:00.178) 0:04:06.325 ********
2026-03-28 01:14:40.426652 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.426657 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.426662 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.426667 | orchestrator |
2026-03-28 01:14:40.426672 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-28 01:14:40.426677 | orchestrator | Saturday 28 March 2026 01:07:07 +0000 (0:00:00.348) 0:04:06.674 ********
2026-03-28 01:14:40.426682 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-28 01:14:40.426687 | orchestrator |
2026-03-28 01:14:40.426692 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-28 01:14:40.426697 | orchestrator | Saturday 28 March 2026 01:07:09 +0000 (0:00:01.421) 0:04:08.095 ********
2026-03-28 01:14:40.426732 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.426738 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.426743 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.426748 | orchestrator |
2026-03-28 01:14:40.426753 | orchestrator | TASK [nova : include_tasks]
****************************************************
2026-03-28 01:14:40.426758 | orchestrator | Saturday 28 March 2026 01:07:09 +0000 (0:00:00.306) 0:04:08.402 ********
2026-03-28 01:14:40.426763 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:14:40.426768 | orchestrator |
2026-03-28 01:14:40.426773 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-28 01:14:40.426777 | orchestrator | Saturday 28 March 2026 01:07:10 +0000 (0:00:00.851) 0:04:09.254 ********
2026-03-28 01:14:40.426811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-28 01:14:40.426883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.426893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled':
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.426902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.426909 | orchestrator | 2026-03-28 01:14:40.426916 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-28 01:14:40.426925 | orchestrator | Saturday 28 March 2026 01:07:14 +0000 (0:00:03.760) 0:04:13.014 ******** 2026-03-28 01:14:40.426934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.426943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.426962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.426971 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.427018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.427037 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.427047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.427122 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.427127 | orchestrator | 2026-03-28 01:14:40.427132 | orchestrator | TASK [service-cert-copy 
: nova | Copying over backend internal TLS key] ******** 2026-03-28 01:14:40.427137 | orchestrator | Saturday 28 March 2026 01:07:15 +0000 (0:00:00.903) 0:04:13.918 ******** 2026-03-28 01:14:40.427143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.427172 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.427178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.427207 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.427306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.427338 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.427372 | orchestrator | 2026-03-28 01:14:40.427382 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-28 01:14:40.427390 | orchestrator | Saturday 28 March 2026 01:07:16 +0000 (0:00:01.788) 0:04:15.707 ******** 2026-03-28 01:14:40.427399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.427498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.427509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.427517 | orchestrator | 2026-03-28 01:14:40.427524 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-28 01:14:40.427532 | orchestrator | Saturday 28 March 2026 01:07:20 +0000 (0:00:03.997) 0:04:19.705 ******** 2026-03-28 01:14:40.427540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427575 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.427633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.427642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.427650 | orchestrator | 2026-03-28 01:14:40.427658 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-28 01:14:40.427668 | orchestrator | Saturday 28 March 2026 01:07:32 +0000 (0:00:11.291) 0:04:30.997 ******** 2026-03-28 01:14:40.427673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.427695 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.427700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.427732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.427737 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.427742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.427751 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.427756 | orchestrator | 2026-03-28 01:14:40.427760 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-28 01:14:40.427765 | orchestrator | Saturday 28 March 2026 01:07:33 +0000 (0:00:01.118) 0:04:32.115 ******** 2026-03-28 01:14:40.427770 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.427775 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.427779 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.427784 | orchestrator | 2026-03-28 01:14:40.427789 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-03-28 01:14:40.427794 | orchestrator | Saturday 28 March 2026 01:07:34 +0000 (0:00:01.060) 0:04:33.176 ******** 2026-03-28 01:14:40.427799 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.427803 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.427808 | orchestrator | skipping: [testbed-node-2] 
2026-03-28 01:14:40.427813 | orchestrator | 2026-03-28 01:14:40.427817 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-03-28 01:14:40.427822 | orchestrator | Saturday 28 March 2026 01:07:35 +0000 (0:00:01.023) 0:04:34.200 ******** 2026-03-28 01:14:40.427827 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-03-28 01:14:40.427832 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-28 01:14:40.427837 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.427842 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-03-28 01:14:40.427846 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-28 01:14:40.427851 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.427856 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-03-28 01:14:40.427860 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-28 01:14:40.427865 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.427870 | orchestrator | 2026-03-28 01:14:40.427875 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-03-28 01:14:40.427879 | orchestrator | Saturday 28 March 2026 01:07:35 +0000 (0:00:00.415) 0:04:34.615 ******** 2026-03-28 01:14:40.427884 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-03-28 01:14:40.427891 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-03-28 01:14:40.427896 | orchestrator | 2026-03-28 01:14:40.427900 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-03-28 01:14:40.427905 | orchestrator | Saturday 28 March 2026 01:07:38 +0000 (0:00:02.558) 
0:04:37.173 ******** 2026-03-28 01:14:40.427910 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:14:40.427914 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:40.427924 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:14:40.427929 | orchestrator | 2026-03-28 01:14:40.427934 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-03-28 01:14:40.427938 | orchestrator | Saturday 28 March 2026 01:07:42 +0000 (0:00:04.382) 0:04:41.556 ******** 2026-03-28 01:14:40.427943 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:14:40.427948 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:40.427953 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:14:40.427957 | orchestrator | 2026-03-28 01:14:40.427962 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-03-28 01:14:40.427967 | orchestrator | Saturday 28 March 2026 01:07:47 +0000 (0:00:04.800) 0:04:46.357 ******** 2026-03-28 01:14:40.427981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.427997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.428009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.428019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 
'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.428025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-28 01:14:40.428030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.428035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.428043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.428053 | orchestrator | 2026-03-28 01:14:40.428059 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-03-28 01:14:40.428066 | orchestrator | Saturday 28 March 2026 01:07:52 +0000 (0:00:04.515) 0:04:50.872 ******** 2026-03-28 01:14:40.428072 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 01:14:40.428077 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:14:40.428082 | orchestrator | } 2026-03-28 01:14:40.428087 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 01:14:40.428093 | orchestrator |  
"msg": "Notifying handlers" 2026-03-28 01:14:40.428098 | orchestrator | } 2026-03-28 01:14:40.428103 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 01:14:40.428107 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:14:40.428112 | orchestrator | } 2026-03-28 01:14:40.428117 | orchestrator | 2026-03-28 01:14:40.428122 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 01:14:40.428127 | orchestrator | Saturday 28 March 2026 01:07:52 +0000 (0:00:00.475) 0:04:51.348 ******** 2026-03-28 01:14:40.428132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.428138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.428143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.428148 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.428164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.428170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.428176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.428181 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.428186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.428194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': 
'30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-28 01:14:40.428208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.428233 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.428242 | orchestrator | 2026-03-28 01:14:40.428251 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 01:14:40.428258 | orchestrator | Saturday 28 March 2026 01:07:54 +0000 (0:00:01.890) 0:04:53.238 ******** 2026-03-28 01:14:40.428266 | orchestrator | 2026-03-28 01:14:40.428274 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 01:14:40.428281 | orchestrator | Saturday 28 March 2026 01:07:54 +0000 (0:00:00.152) 0:04:53.390 ******** 2026-03-28 01:14:40.428286 | orchestrator | 2026-03-28 01:14:40.428290 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-28 01:14:40.428295 | orchestrator | Saturday 28 March 2026 01:07:54 +0000 (0:00:00.164) 
0:04:53.555 ******** 2026-03-28 01:14:40.428300 | orchestrator | 2026-03-28 01:14:40.428305 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-28 01:14:40.428310 | orchestrator | Saturday 28 March 2026 01:07:54 +0000 (0:00:00.170) 0:04:53.725 ******** 2026-03-28 01:14:40.428314 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:40.428320 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:14:40.428327 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:14:40.428334 | orchestrator | 2026-03-28 01:14:40.428339 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-28 01:14:40.428344 | orchestrator | Saturday 28 March 2026 01:08:16 +0000 (0:00:21.847) 0:05:15.572 ******** 2026-03-28 01:14:40.428350 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:14:40.428358 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:40.428363 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:14:40.428368 | orchestrator | 2026-03-28 01:14:40.428373 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 2026-03-28 01:14:40.428378 | orchestrator | Saturday 28 March 2026 01:08:30 +0000 (0:00:13.397) 0:05:28.969 ******** 2026-03-28 01:14:40.428382 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:14:40.428387 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:14:40.428392 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:14:40.428396 | orchestrator | 2026-03-28 01:14:40.428401 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-28 01:14:40.428406 | orchestrator | 2026-03-28 01:14:40.428410 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 01:14:40.428415 | orchestrator | Saturday 28 March 2026 01:08:39 +0000 (0:00:09.691) 0:05:38.661 ******** 2026-03-28 01:14:40.428420 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:14:40.428433 | orchestrator | 2026-03-28 01:14:40.428438 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 01:14:40.428442 | orchestrator | Saturday 28 March 2026 01:08:41 +0000 (0:00:01.357) 0:05:40.019 ******** 2026-03-28 01:14:40.428447 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.428452 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.428456 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.428461 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.428466 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.428475 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.428486 | orchestrator | 2026-03-28 01:14:40.428496 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-03-28 01:14:40.428504 | orchestrator | Saturday 28 March 2026 01:08:41 +0000 (0:00:00.593) 0:05:40.613 ******** 2026-03-28 01:14:40.428533 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:14:40.428540 | orchestrator | 2026-03-28 01:14:40.428547 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-03-28 01:14:40.428555 | orchestrator | Saturday 28 March 2026 01:09:08 +0000 (0:00:26.329) 0:06:06.942 ******** 2026-03-28 01:14:40.428562 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:14:40.428570 | orchestrator | 2026-03-28 01:14:40.428578 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-03-28 01:14:40.428586 | orchestrator | Saturday 28 March 2026 01:09:09 +0000 (0:00:01.196) 0:06:08.138 ******** 2026-03-28 01:14:40.428594 | orchestrator | included: service-image-info for testbed-node-3 2026-03-28 01:14:40.428603 | 
orchestrator | 2026-03-28 01:14:40.428608 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-03-28 01:14:40.428613 | orchestrator | Saturday 28 March 2026 01:09:10 +0000 (0:00:00.686) 0:06:08.825 ******** 2026-03-28 01:14:40.428618 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:14:40.428622 | orchestrator | 2026-03-28 01:14:40.428627 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-03-28 01:14:40.428632 | orchestrator | Saturday 28 March 2026 01:09:13 +0000 (0:00:03.580) 0:06:12.406 ******** 2026-03-28 01:14:40.428637 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:14:40.428642 | orchestrator | 2026-03-28 01:14:40.428652 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-03-28 01:14:40.428657 | orchestrator | Saturday 28 March 2026 01:09:15 +0000 (0:00:02.162) 0:06:14.569 ******** 2026-03-28 01:14:40.428661 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.428666 | orchestrator | 2026-03-28 01:14:40.428671 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-03-28 01:14:40.428676 | orchestrator | Saturday 28 March 2026 01:09:17 +0000 (0:00:02.077) 0:06:16.646 ******** 2026-03-28 01:14:40.428681 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.428686 | orchestrator | 2026-03-28 01:14:40.428690 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-03-28 01:14:40.428700 | orchestrator | Saturday 28 March 2026 01:09:20 +0000 (0:00:02.144) 0:06:18.791 ******** 2026-03-28 01:14:40.428705 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.428710 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.428715 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.428719 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:14:40.428724 | orchestrator | ok: 
[testbed-node-4] 2026-03-28 01:14:40.428729 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:14:40.428734 | orchestrator | 2026-03-28 01:14:40.428739 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-03-28 01:14:40.428743 | orchestrator | Saturday 28 March 2026 01:09:27 +0000 (0:00:07.375) 0:06:26.167 ******** 2026-03-28 01:14:40.428748 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.428753 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.428757 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.428768 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.428773 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.428778 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.428783 | orchestrator | 2026-03-28 01:14:40.428788 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-03-28 01:14:40.428792 | orchestrator | Saturday 28 March 2026 01:09:29 +0000 (0:00:02.580) 0:06:28.747 ******** 2026-03-28 01:14:40.428797 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.428802 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.428807 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.428811 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.428816 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.428821 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.428825 | orchestrator | 2026-03-28 01:14:40.428830 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-28 01:14:40.428835 | orchestrator | Saturday 28 March 2026 01:09:32 +0000 (0:00:02.199) 0:06:30.947 ******** 2026-03-28 01:14:40.428839 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.428844 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.428849 | orchestrator | skipping: 
[testbed-node-2] 2026-03-28 01:14:40.428864 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:14:40.428869 | orchestrator | 2026-03-28 01:14:40.428873 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-28 01:14:40.428878 | orchestrator | Saturday 28 March 2026 01:09:33 +0000 (0:00:01.041) 0:06:31.988 ******** 2026-03-28 01:14:40.428883 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-28 01:14:40.428888 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-28 01:14:40.428893 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-28 01:14:40.428897 | orchestrator | 2026-03-28 01:14:40.428902 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-28 01:14:40.428907 | orchestrator | Saturday 28 March 2026 01:09:34 +0000 (0:00:00.978) 0:06:32.967 ******** 2026-03-28 01:14:40.428911 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-28 01:14:40.428916 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-28 01:14:40.428921 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-28 01:14:40.428926 | orchestrator | 2026-03-28 01:14:40.428931 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-28 01:14:40.428935 | orchestrator | Saturday 28 March 2026 01:09:35 +0000 (0:00:01.216) 0:06:34.183 ******** 2026-03-28 01:14:40.428940 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-28 01:14:40.428945 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.428949 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-28 01:14:40.428954 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.428959 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-28 01:14:40.428963 | 
orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.428968 | orchestrator | 2026-03-28 01:14:40.428973 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-28 01:14:40.428978 | orchestrator | Saturday 28 March 2026 01:09:35 +0000 (0:00:00.584) 0:06:34.767 ******** 2026-03-28 01:14:40.428983 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 01:14:40.428987 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 01:14:40.428995 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.429005 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-28 01:14:40.429017 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 01:14:40.429024 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 01:14:40.429031 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.429045 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-28 01:14:40.429053 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-28 01:14:40.429060 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-28 01:14:40.429067 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.429074 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-28 01:14:40.429086 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-28 01:14:40.429093 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-28 01:14:40.429101 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-28 01:14:40.429108 | orchestrator | 2026-03-28 
01:14:40.429116 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-28 01:14:40.429125 | orchestrator | Saturday 28 March 2026 01:09:37 +0000 (0:00:01.585) 0:06:36.353 ******** 2026-03-28 01:14:40.429132 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.429141 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.429147 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.429532 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:14:40.429555 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:14:40.429560 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:14:40.429564 | orchestrator | 2026-03-28 01:14:40.429569 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-28 01:14:40.429575 | orchestrator | Saturday 28 March 2026 01:09:38 +0000 (0:00:01.399) 0:06:37.753 ******** 2026-03-28 01:14:40.429579 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.429584 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.429589 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.429594 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:14:40.429598 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:14:40.429603 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:14:40.429608 | orchestrator | 2026-03-28 01:14:40.429613 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-28 01:14:40.429617 | orchestrator | Saturday 28 March 2026 01:09:40 +0000 (0:00:01.911) 0:06:39.665 ******** 2026-03-28 01:14:40.429624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429659 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429680 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429759 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429789 | orchestrator | 2026-03-28 01:14:40.429797 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 01:14:40.429805 | orchestrator | Saturday 28 March 2026 01:09:43 +0000 (0:00:02.238) 0:06:41.903 ******** 2026-03-28 01:14:40.429816 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:14:40.429826 | orchestrator | 2026-03-28 01:14:40.429834 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-28 01:14:40.429841 | orchestrator | Saturday 28 March 2026 01:09:44 +0000 (0:00:01.372) 0:06:43.276 ******** 2026-03-28 01:14:40.429858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429865 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 
01:14:40.429916 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429925 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429933 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429968 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.429981 | orchestrator | 2026-03-28 01:14:40.429986 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-28 01:14:40.429990 | orchestrator | Saturday 28 March 2026 01:09:49 +0000 (0:00:04.573) 0:06:47.849 ******** 2026-03-28 01:14:40.429995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.430000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.430007 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430012 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.430065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.430071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.430080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430085 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.430089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.430094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.430101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:14:40.430109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:14:40.430113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430122 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.430126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430130 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.430135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430139 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.430144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:14:40.430153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430158 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.430163 | orchestrator | 2026-03-28 01:14:40.430168 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-28 01:14:40.430173 | orchestrator | Saturday 28 March 2026 01:09:51 +0000 (0:00:02.609) 
0:06:50.458 ******** 2026-03-28 01:14:40.430182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.430192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.430197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.430202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.430210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430234 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.430244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.430254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.430259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:14:40.430264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430269 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.430274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:14:40.430279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430284 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.430300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430309 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.430314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430319 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.430324 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:14:40.430332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.430338 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.430343 | orchestrator | 2026-03-28 01:14:40.430347 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-28 01:14:40.430353 | orchestrator | Saturday 28 March 2026 01:09:54 +0000 (0:00:03.043) 0:06:53.501 ******** 2026-03-28 01:14:40.430357 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.430362 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.430367 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.430372 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:14:40.430377 | 
orchestrator | 2026-03-28 01:14:40.430382 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-28 01:14:40.430386 | orchestrator | Saturday 28 March 2026 01:09:55 +0000 (0:00:01.000) 0:06:54.502 ******** 2026-03-28 01:14:40.430391 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 01:14:40.430396 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 01:14:40.430401 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 01:14:40.430406 | orchestrator | 2026-03-28 01:14:40.430411 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-28 01:14:40.430415 | orchestrator | Saturday 28 March 2026 01:09:57 +0000 (0:00:01.529) 0:06:56.032 ******** 2026-03-28 01:14:40.430420 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 01:14:40.430425 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 01:14:40.430430 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 01:14:40.430435 | orchestrator | 2026-03-28 01:14:40.430440 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-28 01:14:40.430444 | orchestrator | Saturday 28 March 2026 01:09:58 +0000 (0:00:01.496) 0:06:57.528 ******** 2026-03-28 01:14:40.430453 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:14:40.430459 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:14:40.430463 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:14:40.430468 | orchestrator | 2026-03-28 01:14:40.430474 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-28 01:14:40.430479 | orchestrator | Saturday 28 March 2026 01:09:59 +0000 (0:00:00.483) 0:06:58.012 ******** 2026-03-28 01:14:40.430487 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:14:40.430492 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:14:40.430496 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:14:40.430501 
| orchestrator | 2026-03-28 01:14:40.430506 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-28 01:14:40.430512 | orchestrator | Saturday 28 March 2026 01:09:59 +0000 (0:00:00.524) 0:06:58.539 ******** 2026-03-28 01:14:40.430516 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-28 01:14:40.430522 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-28 01:14:40.430527 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-28 01:14:40.430531 | orchestrator | 2026-03-28 01:14:40.430538 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-28 01:14:40.430543 | orchestrator | Saturday 28 March 2026 01:10:01 +0000 (0:00:01.345) 0:06:59.885 ******** 2026-03-28 01:14:40.430547 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-28 01:14:40.430551 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-28 01:14:40.430556 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-28 01:14:40.430560 | orchestrator | 2026-03-28 01:14:40.430564 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-28 01:14:40.430569 | orchestrator | Saturday 28 March 2026 01:10:02 +0000 (0:00:01.274) 0:07:01.160 ******** 2026-03-28 01:14:40.430573 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-28 01:14:40.430577 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-28 01:14:40.430581 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-28 01:14:40.430586 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-28 01:14:40.430590 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-28 01:14:40.430594 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-28 01:14:40.430598 | orchestrator | 
2026-03-28 01:14:40.430603 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-28 01:14:40.430607 | orchestrator | Saturday 28 March 2026 01:10:07 +0000 (0:00:05.390) 0:07:06.550 ******** 2026-03-28 01:14:40.430611 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.430615 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.430619 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.430624 | orchestrator | 2026-03-28 01:14:40.430628 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-28 01:14:40.430632 | orchestrator | Saturday 28 March 2026 01:10:08 +0000 (0:00:00.345) 0:07:06.896 ******** 2026-03-28 01:14:40.430636 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.430641 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.430645 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.430649 | orchestrator | 2026-03-28 01:14:40.430653 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-28 01:14:40.430658 | orchestrator | Saturday 28 March 2026 01:10:08 +0000 (0:00:00.360) 0:07:07.257 ******** 2026-03-28 01:14:40.430662 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:14:40.430666 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:14:40.430670 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:14:40.430675 | orchestrator | 2026-03-28 01:14:40.430679 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-28 01:14:40.430683 | orchestrator | Saturday 28 March 2026 01:10:10 +0000 (0:00:01.704) 0:07:08.961 ******** 2026-03-28 01:14:40.430691 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-03-28 01:14:40.430696 
| orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-03-28 01:14:40.430701 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-03-28 01:14:40.430705 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-03-28 01:14:40.430710 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-03-28 01:14:40.430715 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-03-28 01:14:40.430719 | orchestrator | 2026-03-28 01:14:40.430723 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-28 01:14:40.430727 | orchestrator | Saturday 28 March 2026 01:10:14 +0000 (0:00:04.292) 0:07:13.253 ******** 2026-03-28 01:14:40.430732 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-28 01:14:40.430736 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 01:14:40.430740 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-28 01:14:40.430744 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-28 01:14:40.430749 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:14:40.430753 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-28 01:14:40.430757 | orchestrator | 
changed: [testbed-node-5] 2026-03-28 01:14:40.430761 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-28 01:14:40.430768 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:14:40.430772 | orchestrator | 2026-03-28 01:14:40.430777 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-28 01:14:40.430781 | orchestrator | Saturday 28 March 2026 01:10:18 +0000 (0:00:03.763) 0:07:17.017 ******** 2026-03-28 01:14:40.430785 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.430789 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.430794 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.430798 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-28 01:14:40.430802 | orchestrator | 2026-03-28 01:14:40.430809 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-28 01:14:40.430813 | orchestrator | Saturday 28 March 2026 01:10:21 +0000 (0:00:03.222) 0:07:20.239 ******** 2026-03-28 01:14:40.430818 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 01:14:40.430822 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-28 01:14:40.430826 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-28 01:14:40.430830 | orchestrator | 2026-03-28 01:14:40.430834 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-28 01:14:40.430839 | orchestrator | Saturday 28 March 2026 01:10:22 +0000 (0:00:01.101) 0:07:21.340 ******** 2026-03-28 01:14:40.430843 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.430847 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.430852 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.430856 | orchestrator | 2026-03-28 01:14:40.430860 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] 
********************** 2026-03-28 01:14:40.430865 | orchestrator | Saturday 28 March 2026 01:10:22 +0000 (0:00:00.309) 0:07:21.649 ******** 2026-03-28 01:14:40.430872 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.430876 | orchestrator | 2026-03-28 01:14:40.430881 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-28 01:14:40.430885 | orchestrator | Saturday 28 March 2026 01:10:23 +0000 (0:00:00.139) 0:07:21.789 ******** 2026-03-28 01:14:40.430890 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.430894 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.430898 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.430903 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.430907 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.430911 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.430915 | orchestrator | 2026-03-28 01:14:40.430920 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-28 01:14:40.430924 | orchestrator | Saturday 28 March 2026 01:10:23 +0000 (0:00:00.942) 0:07:22.732 ******** 2026-03-28 01:14:40.430928 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-28 01:14:40.430933 | orchestrator | 2026-03-28 01:14:40.430937 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-28 01:14:40.430941 | orchestrator | Saturday 28 March 2026 01:10:24 +0000 (0:00:00.925) 0:07:23.658 ******** 2026-03-28 01:14:40.430945 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.430950 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.430954 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.430958 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.430962 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.430966 | orchestrator | skipping: [testbed-node-2] 
2026-03-28 01:14:40.430971 | orchestrator | 2026-03-28 01:14:40.430975 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-28 01:14:40.430979 | orchestrator | Saturday 28 March 2026 01:10:25 +0000 (0:00:00.747) 0:07:24.406 ******** 2026-03-28 01:14:40.430984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.430989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431022 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431063 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431072 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431079 | orchestrator | 2026-03-28 01:14:40.431084 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-28 01:14:40.431091 | orchestrator | Saturday 28 March 2026 01:10:31 +0000 (0:00:05.484) 0:07:29.890 ******** 2026-03-28 01:14:40.431095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.431100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.431104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.431109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.431116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.431128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.431133 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431137 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.431183 | orchestrator | 2026-03-28 01:14:40.431187 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-28 01:14:40.431191 | orchestrator | Saturday 28 March 2026 01:10:38 +0000 (0:00:07.779) 0:07:37.670 ******** 2026-03-28 01:14:40.431196 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.431200 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.431204 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.431208 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.431213 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.431230 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.431234 | orchestrator | 2026-03-28 01:14:40.431239 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-28 01:14:40.431243 | orchestrator | Saturday 28 March 2026 01:10:41 +0000 (0:00:02.364) 0:07:40.035 ******** 2026-03-28 01:14:40.431251 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-28 01:14:40.431255 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-28 01:14:40.431260 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-28 01:14:40.431264 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-28 01:14:40.431268 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-28 01:14:40.431272 | orchestrator | changed: [testbed-node-5] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-28 01:14:40.431277 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-28 01:14:40.431281 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.431285 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-28 01:14:40.431290 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.431296 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-28 01:14:40.431301 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.431305 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-28 01:14:40.431309 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-28 01:14:40.431314 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-28 01:14:40.431318 | orchestrator | 2026-03-28 01:14:40.431322 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-28 01:14:40.431330 | orchestrator | Saturday 28 March 2026 01:10:47 +0000 (0:00:05.770) 0:07:45.805 ******** 2026-03-28 01:14:40.431334 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.431339 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.431343 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.431347 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.431351 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.431356 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.431360 | orchestrator | 2026-03-28 01:14:40.431364 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-28 01:14:40.431369 | orchestrator | 
Saturday 28 March 2026 01:10:48 +0000 (0:00:00.985) 0:07:46.791 ******** 2026-03-28 01:14:40.431373 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-28 01:14:40.431378 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-28 01:14:40.431382 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-28 01:14:40.431386 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-28 01:14:40.431391 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-28 01:14:40.431395 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-28 01:14:40.431399 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-28 01:14:40.431403 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-28 01:14:40.431408 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-28 01:14:40.431412 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-28 01:14:40.431420 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.431424 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-28 01:14:40.431428 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.431433 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:14:40.431437 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-28 01:14:40.431441 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.431445 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:14:40.431450 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:14:40.431454 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:14:40.431458 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:14:40.431463 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-28 01:14:40.431467 | orchestrator | 2026-03-28 01:14:40.431471 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-28 01:14:40.431475 | orchestrator | Saturday 28 March 2026 01:10:54 +0000 (0:00:06.909) 0:07:53.700 ******** 2026-03-28 01:14:40.431480 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 01:14:40.431484 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 01:14:40.431488 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-28 01:14:40.431493 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-28 01:14:40.431497 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-28 01:14:40.431501 | 
orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:14:40.431505 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:14:40.431512 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-28 01:14:40.431516 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 01:14:40.431520 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-28 01:14:40.431527 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 01:14:40.431534 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-28 01:14:40.431545 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-28 01:14:40.431553 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.431563 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:14:40.431570 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-28 01:14:40.431576 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.431583 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:14:40.431590 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-28 01:14:40.431596 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.431602 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-28 01:14:40.431609 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:14:40.431621 | orchestrator | changed: [testbed-node-5] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:14:40.431628 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-28 01:14:40.431634 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:14:40.431641 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:14:40.431647 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-28 01:14:40.431653 | orchestrator | 2026-03-28 01:14:40.431661 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-28 01:14:40.431668 | orchestrator | Saturday 28 March 2026 01:11:04 +0000 (0:00:09.450) 0:08:03.151 ******** 2026-03-28 01:14:40.431674 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.431681 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.431687 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.431694 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.431700 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.431706 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.431712 | orchestrator | 2026-03-28 01:14:40.431719 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-28 01:14:40.431726 | orchestrator | Saturday 28 March 2026 01:11:04 +0000 (0:00:00.603) 0:08:03.754 ******** 2026-03-28 01:14:40.431733 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.431739 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.431746 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.431752 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.431760 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.431766 | orchestrator | skipping: [testbed-node-2] 2026-03-28 
01:14:40.431773 | orchestrator | 2026-03-28 01:14:40.431780 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-28 01:14:40.431786 | orchestrator | Saturday 28 March 2026 01:11:05 +0000 (0:00:00.827) 0:08:04.582 ******** 2026-03-28 01:14:40.431793 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.431799 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.431806 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.431813 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:14:40.431820 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:14:40.431826 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:14:40.431833 | orchestrator | 2026-03-28 01:14:40.431839 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-03-28 01:14:40.431846 | orchestrator | Saturday 28 March 2026 01:11:08 +0000 (0:00:02.238) 0:08:06.820 ******** 2026-03-28 01:14:40.431853 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.431860 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:14:40.431866 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.431873 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.431880 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:14:40.431887 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:14:40.431894 | orchestrator | 2026-03-28 01:14:40.431901 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-28 01:14:40.431907 | orchestrator | Saturday 28 March 2026 01:11:10 +0000 (0:00:02.250) 0:08:09.071 ******** 2026-03-28 01:14:40.431922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.431944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.431952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.431960 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.431967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.431975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.431982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.431994 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.432010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-28 01:14:40.432018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-28 01:14:40.432025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.432032 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.432040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:14:40.432048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.432055 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.432062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:14:40.432080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.432088 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.432100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-28 01:14:40.432108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-28 01:14:40.432115 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.432122 | orchestrator | 2026-03-28 01:14:40.432129 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-28 01:14:40.432135 | orchestrator | Saturday 28 March 2026 01:11:11 +0000 (0:00:01.619) 0:08:10.690 ******** 2026-03-28 01:14:40.432143 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-28 01:14:40.432150 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-28 01:14:40.432157 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:14:40.432164 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-28 01:14:40.432171 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-28 01:14:40.432178 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:14:40.432185 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-28 01:14:40.432192 | orchestrator | skipping: 
[testbed-node-5] => (item=nova-compute-ironic)  2026-03-28 01:14:40.432199 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:14:40.432206 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-28 01:14:40.432212 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-28 01:14:40.432269 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:14:40.432277 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-28 01:14:40.432284 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-28 01:14:40.432291 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:14:40.432298 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-28 01:14:40.432305 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-28 01:14:40.432319 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:14:40.432326 | orchestrator | 2026-03-28 01:14:40.432333 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-03-28 01:14:40.432340 | orchestrator | Saturday 28 March 2026 01:11:12 +0000 (0:00:00.964) 0:08:11.654 ******** 2026-03-28 01:14:40.432348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.432365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.432373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.432381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-28 01:14:40.432388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.432400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-28 01:14:40.432408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.432419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-28 01:14:40.432431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-28 01:14:40.432439 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:14:40.432446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432495 | orchestrator |
2026-03-28 01:14:40.432502 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] ***
2026-03-28 01:14:40.432509 | orchestrator | Saturday 28 March 2026 01:11:16 +0000 (0:00:04.106) 0:08:15.761 ********
2026-03-28 01:14:40.432516 | orchestrator | changed: [testbed-node-3] => {
2026-03-28 01:14:40.432523 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:14:40.432530 | orchestrator | }
2026-03-28 01:14:40.432538 | orchestrator | changed: [testbed-node-4] => {
2026-03-28 01:14:40.432544 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:14:40.432551 | orchestrator | }
2026-03-28 01:14:40.432558 | orchestrator | changed: [testbed-node-5] => {
2026-03-28 01:14:40.432564 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:14:40.432570 | orchestrator | }
2026-03-28 01:14:40.432577 | orchestrator | changed: [testbed-node-0] => {
2026-03-28 01:14:40.432583 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:14:40.432589 | orchestrator | }
2026-03-28 01:14:40.432596 | orchestrator | changed: [testbed-node-1] => {
2026-03-28 01:14:40.432602 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:14:40.432609 | orchestrator | }
2026-03-28 01:14:40.432622 | orchestrator | changed: [testbed-node-2] => {
2026-03-28 01:14:40.432629 | orchestrator |  "msg": "Notifying handlers"
2026-03-28 01:14:40.432637 | orchestrator | }
2026-03-28 01:14:40.432644 | orchestrator |
2026-03-28 01:14:40.432651 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-28 01:14:40.432658 | orchestrator | Saturday 28 March 2026 01:11:18 +0000 (0:00:01.109) 0:08:16.870 ********
2026-03-28 01:14:40.432666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:14:40.432674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:14:40.432681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432693 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:40.432764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:14:40.432774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:14:40.432787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432794 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:40.432800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-28 01:14:40.432807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-28 01:14:40.432817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432823 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:40.432834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:14:40.432845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432853 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.432859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:14:40.432865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432872 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.432878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image':
'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-28 01:14:40.432888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-28 01:14:40.432895 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.432902 | orchestrator |
2026-03-28 01:14:40.432908 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-28 01:14:40.432915 | orchestrator | Saturday 28 March 2026 01:11:20 +0000 (0:00:02.542) 0:08:19.413 ********
2026-03-28 01:14:40.432921 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:40.432927 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:40.432934 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:40.432940 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.432950 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.432957 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.432963 | orchestrator |
2026-03-28 01:14:40.432969 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:14:40.432981 | orchestrator | Saturday 28 March 2026 01:11:21 +0000 (0:00:00.168) 0:08:20.051 ********
2026-03-28 01:14:40.432987 | orchestrator |
2026-03-28 01:14:40.432994 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:14:40.433000 | orchestrator | Saturday 28 March 2026 01:11:21 +0000 (0:00:00.134) 0:08:20.219 ********
2026-03-28 01:14:40.433007 | orchestrator |
2026-03-28 01:14:40.433013 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:14:40.433019 | orchestrator | Saturday 28 March 2026 01:11:21 +0000 (0:00:00.312) 0:08:20.666 ********
2026-03-28 01:14:40.433025 | orchestrator |
2026-03-28 01:14:40.433031 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:14:40.433038 | orchestrator | Saturday 28 March 2026 01:11:22 +0000 (0:00:00.135) 0:08:20.802 ********
2026-03-28 01:14:40.433044 | orchestrator |
2026-03-28 01:14:40.433050 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-28 01:14:40.433056 | orchestrator | Saturday 28 March 2026 01:11:22 +0000 (0:00:00.158) 0:08:20.960 ********
2026-03-28 01:14:40.433063 | orchestrator |
2026-03-28 01:14:40.433069 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-28 01:14:40.433075 | orchestrator | Saturday 28 March 2026 01:11:22 +0000 (0:00:00.137) 0:08:21.098 ********
2026-03-28 01:14:40.433081 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.433087 | orchestrator |
2026-03-28 01:14:40.433093 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-28 01:14:40.433100 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.433106 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:14:40.433112 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:14:40.433119 | orchestrator |
2026-03-28 01:14:40.433125 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-28 01:14:40.433131 | orchestrator | Saturday 28 March 2026 01:11:30 +0000 (0:00:08.519) 0:08:29.617 ********
2026-03-28 01:14:40.433137 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.433143 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:14:40.433149 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:14:40.433156 | orchestrator |
2026-03-28 01:14:40.433162 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-28 01:14:40.433168 | orchestrator | Saturday 28 March 2026 01:11:50 +0000 (0:00:20.154) 0:08:49.771 ********
2026-03-28 01:14:40.433174 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:14:40.433181 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:14:40.433187 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:14:40.433193 | orchestrator |
2026-03-28 01:14:40.433199 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-28 01:14:40.433205 | orchestrator | Saturday 28 March 2026 01:12:15 +0000 (0:00:24.617) 0:09:14.389 ********
2026-03-28 01:14:40.433212 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:14:40.433235 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:14:40.433242 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:14:40.433248 | orchestrator |
2026-03-28 01:14:40.433254 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-28 01:14:40.433261 | orchestrator | Saturday 28 March 2026 01:12:52 +0000 (0:00:36.639) 0:09:51.028 ********
2026-03-28 01:14:40.433267 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:14:40.433272 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:14:40.433278 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:14:40.433285 | orchestrator |
2026-03-28 01:14:40.433291 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-28 01:14:40.433298 | orchestrator | Saturday 28 March 2026 01:12:53 +0000 (0:00:00.908) 0:09:51.937 ********
2026-03-28 01:14:40.433304 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:14:40.433311 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:14:40.433317 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:14:40.433334 | orchestrator |
2026-03-28 01:14:40.433341 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-28 01:14:40.433348 | orchestrator | Saturday 28 March 2026 01:12:54 +0000 (0:00:01.057) 0:09:52.994 ********
2026-03-28 01:14:40.433354 | orchestrator | changed: [testbed-node-5]
2026-03-28 01:14:40.433360 | orchestrator | changed: [testbed-node-3]
2026-03-28 01:14:40.433367 | orchestrator | changed: [testbed-node-4]
2026-03-28 01:14:40.433373 | orchestrator |
2026-03-28 01:14:40.433380 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-03-28 01:14:40.433387 | orchestrator | Saturday 28 March 2026 01:13:16 +0000 (0:00:22.020) 0:10:15.015 ********
2026-03-28 01:14:40.433393 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:40.433399 | orchestrator |
2026-03-28 01:14:40.433405 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-03-28 01:14:40.433412 | orchestrator | Saturday 28 March 2026 01:13:16 +0000 (0:00:00.141) 0:10:15.156 ********
2026-03-28 01:14:40.433418 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.433424 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:40.433430 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.433437 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:40.433443 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.433454 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-03-28 01:14:40.433462 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 01:14:40.433469 | orchestrator |
2026-03-28 01:14:40.433475 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-28 01:14:40.433482 | orchestrator | Saturday 28 March 2026 01:13:40 +0000 (0:00:23.766) 0:10:38.923 ********
2026-03-28 01:14:40.433488 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.433495 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:40.433501 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:40.433507 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:40.433514 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.433525 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.433532 | orchestrator |
2026-03-28 01:14:40.433538 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-28 01:14:40.433545 | orchestrator | Saturday 28 March 2026 01:13:51 +0000 (0:00:11.346) 0:10:50.270 ********
2026-03-28 01:14:40.433551 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:40.433557 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:40.433564 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.433570 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.433576 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.433582 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2026-03-28 01:14:40.433589 | orchestrator |
2026-03-28 01:14:40.433595 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-28 01:14:40.433601 | orchestrator | Saturday 28 March 2026 01:13:57 +0000 (0:00:05.921) 0:10:56.192 ********
2026-03-28 01:14:40.433607 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 01:14:40.433614 | orchestrator |
2026-03-28 01:14:40.433620 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-28 01:14:40.433627 | orchestrator | Saturday 28 March 2026 01:14:13 +0000 (0:00:16.583) 0:11:12.775 ********
2026-03-28 01:14:40.433633 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 01:14:40.433639 | orchestrator |
2026-03-28 01:14:40.433646 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-03-28 01:14:40.433651 | orchestrator | Saturday 28 March 2026 01:14:15 +0000 (0:00:01.518) 0:11:14.293 ********
2026-03-28 01:14:40.433658 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:40.433665 | orchestrator |
2026-03-28 01:14:40.433671 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-03-28 01:14:40.433684 | orchestrator | Saturday 28 March 2026 01:14:17 +0000 (0:00:01.557) 0:11:15.851 ********
2026-03-28 01:14:40.433690 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-28 01:14:40.433697 | orchestrator |
2026-03-28 01:14:40.433703 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-28 01:14:40.433709 | orchestrator |
2026-03-28 01:14:40.433716 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-28 01:14:40.433722 | orchestrator | Saturday 28 March 2026 01:14:31 +0000 (0:00:14.303) 0:11:30.155 ********
2026-03-28 01:14:40.433728 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:14:40.433735 | orchestrator | changed: [testbed-node-1]
2026-03-28 01:14:40.433741 | orchestrator | changed: [testbed-node-2]
2026-03-28 01:14:40.433747 | orchestrator |
2026-03-28 01:14:40.433754 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-28 01:14:40.433760 | orchestrator |
2026-03-28 01:14:40.433766 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-28 01:14:40.433772 | orchestrator | Saturday 28 March 2026 01:14:32 +0000 (0:00:01.249) 0:11:31.404 ********
2026-03-28 01:14:40.433778 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.433784 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.433790 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.433797 | orchestrator |
2026-03-28 01:14:40.433803 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-28 01:14:40.433809 | orchestrator |
2026-03-28 01:14:40.433815 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-28 01:14:40.433822 | orchestrator | Saturday 28 March 2026 01:14:33 +0000 (0:00:00.674) 0:11:32.079 ********
2026-03-28 01:14:40.433828 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-28 01:14:40.433834 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-28 01:14:40.433840 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-28 01:14:40.433847 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-28 01:14:40.433854 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-28 01:14:40.433860 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-28 01:14:40.433867 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:14:40.433873 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-28 01:14:40.433880 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-28 01:14:40.433887 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-28 01:14:40.433893 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-28 01:14:40.433899 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-28 01:14:40.433905 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-28 01:14:40.433912 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:14:40.433918 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-28 01:14:40.433925 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-28 01:14:40.433931 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-28 01:14:40.433937 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-28 01:14:40.433942 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-28 01:14:40.433948 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-28 01:14:40.433955 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:14:40.433966 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-28 01:14:40.433972 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-28 01:14:40.433979 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-28 01:14:40.433985 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-28 01:14:40.433997 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-28 01:14:40.434003 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-28 01:14:40.434009 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.434050 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-28 01:14:40.434060 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-28 01:14:40.434067 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-28 01:14:40.434074 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-28 01:14:40.434081 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-28 01:14:40.434088 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-28 01:14:40.434095 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.434102 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-28 01:14:40.434110 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-28 01:14:40.434116 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-28 01:14:40.434123 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-28 01:14:40.434130 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-28 01:14:40.434137 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-28 01:14:40.434144 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.434151 | orchestrator |
2026-03-28 01:14:40.434158 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-28 01:14:40.434165 | orchestrator |
2026-03-28 01:14:40.434172 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-28 01:14:40.434179 | orchestrator | Saturday 28 March 2026 01:14:34 +0000 (0:00:01.551) 0:11:33.631 ********
2026-03-28 01:14:40.434186 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-28 01:14:40.434193 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-28 01:14:40.434201 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.434208 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-28 01:14:40.434229 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-28 01:14:40.434236 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.434242 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-28 01:14:40.434249 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-28 01:14:40.434254 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.434260 | orchestrator |
2026-03-28 01:14:40.434266 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-28 01:14:40.434272 | orchestrator |
2026-03-28 01:14:40.434279 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-28 01:14:40.434285 | orchestrator | Saturday 28 March 2026 01:14:35 +0000 (0:00:00.799) 0:11:34.430 ********
2026-03-28 01:14:40.434291 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.434297 | orchestrator |
2026-03-28 01:14:40.434305 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-28 01:14:40.434309 | orchestrator |
2026-03-28 01:14:40.434314 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-28 01:14:40.434321 | orchestrator | Saturday 28 March 2026 01:14:36 +0000 (0:00:00.821) 0:11:35.252 ********
2026-03-28 01:14:40.434328 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:14:40.434334 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:14:40.434340 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:14:40.434346 | orchestrator |
2026-03-28 01:14:40.434350 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:14:40.434354 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-28 01:14:40.434360 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-03-28 01:14:40.434372 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=60  rescued=0 ignored=0
2026-03-28 01:14:40.434376 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 failed=0 skipped=60  rescued=0 ignored=0
2026-03-28 01:14:40.434380 | orchestrator | testbed-node-3 : ok=52  changed=30  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0
2026-03-28 01:14:40.434384 | orchestrator | testbed-node-4 : ok=41  changed=29  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0
2026-03-28 01:14:40.434387 | orchestrator | testbed-node-5 : ok=41  changed=29  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0
2026-03-28 01:14:40.434391 | orchestrator |
2026-03-28 01:14:40.434395 | orchestrator |
2026-03-28 01:14:40.434399 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:14:40.434403 | orchestrator | Saturday 28 March 2026 01:14:36 +0000 (0:00:00.461) 0:11:35.713 ********
2026-03-28 01:14:40.434407 | orchestrator | ===============================================================================
2026-03-28 01:14:40.434414 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.64s
2026-03-28 01:14:40.434418 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 35.81s
2026-03-28 01:14:40.434422 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 26.33s
2026-03-28 01:14:40.434426 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 25.92s
2026-03-28 01:14:40.434430 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.62s
2026-03-28 01:14:40.434438 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.77s
2026-03-28 01:14:40.434442 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.02s
2026-03-28 01:14:40.434446 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.85s
2026-03-28 01:14:40.434450 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.33s
2026-03-28 01:14:40.434453 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 20.15s
2026-03-28 01:14:40.434457 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 16.58s
2026-03-28 01:14:40.434461 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.45s
2026-03-28 01:14:40.434465 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 16.41s
2026-03-28 01:14:40.434468 | orchestrator | nova-cell : Create cell ------------------------------------------------ 15.61s
2026-03-28 01:14:40.434472 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 15.21s
2026-03-28 01:14:40.434476 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.43s
2026-03-28 01:14:40.434480 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 14.30s
2026-03-28 01:14:40.434484 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.40s
2026-03-28 01:14:40.434487 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.35s
2026-03-28 01:14:40.434491 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 11.29s
2026-03-28 01:14:40.434495 | orchestrator | 2026-03-28 01:14:40 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:14:40.434499 | orchestrator | 2026-03-28 01:14:40 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:14:43.481470 | orchestrator | 2026-03-28 01:14:43 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:14:43.481568 | orchestrator | 2026-03-28 01:14:43 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:14:46.527989 | orchestrator | 2026-03-28 01:14:46 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:14:46.528084 | orchestrator | 2026-03-28 01:14:46 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:14:49.572464 | orchestrator | 2026-03-28 01:14:49 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:14:49.572568 | orchestrator | 2026-03-28 01:14:49 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:14:52.614295 | orchestrator | 2026-03-28 01:14:52 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:14:52.614373 | orchestrator | 2026-03-28 01:14:52 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:14:55.665401 | orchestrator | 2026-03-28 01:14:55 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:14:55.665528 | orchestrator | 2026-03-28 01:14:55 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:14:58.727821 | orchestrator | 2026-03-28 01:14:58 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:14:58.727922 | orchestrator | 2026-03-28 01:14:58 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:15:01.764704 | orchestrator | 2026-03-28 01:15:01 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:15:01.764826 | orchestrator | 2026-03-28 01:15:01 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:15:04.811987 | orchestrator | 2026-03-28 01:15:04 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED
2026-03-28 01:15:04.812104 | orchestrator | 2026-03-28 01:15:04 | INFO  | Wait 1 second(s) until the next check
2026-03-28 01:15:07.851572 | orchestrator | 2026-03-28 01:15:07 | INFO  | Task
31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:07.851648 | orchestrator | 2026-03-28 01:15:07 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:10.904584 | orchestrator | 2026-03-28 01:15:10 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:10.904704 | orchestrator | 2026-03-28 01:15:10 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:13.949042 | orchestrator | 2026-03-28 01:15:13 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:13.949335 | orchestrator | 2026-03-28 01:15:13 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:16.990634 | orchestrator | 2026-03-28 01:15:16 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:16.990737 | orchestrator | 2026-03-28 01:15:16 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:20.042358 | orchestrator | 2026-03-28 01:15:20 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:20.042428 | orchestrator | 2026-03-28 01:15:20 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:23.082826 | orchestrator | 2026-03-28 01:15:23 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:23.082955 | orchestrator | 2026-03-28 01:15:23 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:26.124423 | orchestrator | 2026-03-28 01:15:26 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:26.124543 | orchestrator | 2026-03-28 01:15:26 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:29.161948 | orchestrator | 2026-03-28 01:15:29 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:29.162219 | orchestrator | 2026-03-28 01:15:29 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:32.199547 | orchestrator | 2026-03-28 01:15:32 | INFO  | Task 
31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:32.199630 | orchestrator | 2026-03-28 01:15:32 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:35.246383 | orchestrator | 2026-03-28 01:15:35 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:35.246475 | orchestrator | 2026-03-28 01:15:35 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:38.300527 | orchestrator | 2026-03-28 01:15:38 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:38.300666 | orchestrator | 2026-03-28 01:15:38 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:41.340970 | orchestrator | 2026-03-28 01:15:41 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:41.341080 | orchestrator | 2026-03-28 01:15:41 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:44.392439 | orchestrator | 2026-03-28 01:15:44 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:44.392563 | orchestrator | 2026-03-28 01:15:44 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:47.436976 | orchestrator | 2026-03-28 01:15:47 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:47.437252 | orchestrator | 2026-03-28 01:15:47 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:50.481238 | orchestrator | 2026-03-28 01:15:50 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:50.481314 | orchestrator | 2026-03-28 01:15:50 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:53.528497 | orchestrator | 2026-03-28 01:15:53 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:53.528565 | orchestrator | 2026-03-28 01:15:53 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:56.574688 | orchestrator | 2026-03-28 01:15:56 | INFO  | Task 
31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:56.574800 | orchestrator | 2026-03-28 01:15:56 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:15:59.626768 | orchestrator | 2026-03-28 01:15:59 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:15:59.626922 | orchestrator | 2026-03-28 01:15:59 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:02.673975 | orchestrator | 2026-03-28 01:16:02 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:16:02.674895 | orchestrator | 2026-03-28 01:16:02 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:05.725504 | orchestrator | 2026-03-28 01:16:05 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:16:05.725596 | orchestrator | 2026-03-28 01:16:05 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:08.765768 | orchestrator | 2026-03-28 01:16:08 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:16:08.765887 | orchestrator | 2026-03-28 01:16:08 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:11.806073 | orchestrator | 2026-03-28 01:16:11 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:16:11.806268 | orchestrator | 2026-03-28 01:16:11 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:14.851750 | orchestrator | 2026-03-28 01:16:14 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:16:14.851901 | orchestrator | 2026-03-28 01:16:14 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:17.892340 | orchestrator | 2026-03-28 01:16:17 | INFO  | Task 31549132-a151-4e85-98b0-4531a2cb0af1 is in state STARTED 2026-03-28 01:16:17.892414 | orchestrator | 2026-03-28 01:16:17 | INFO  | Wait 1 second(s) until the next check 2026-03-28 01:16:20.938000 | orchestrator | 2026-03-28 01:16:20 | INFO  | Task 
31549132-a151-4e85-98b0-4531a2cb0af1 is in state SUCCESS 2026-03-28 01:16:20.940499 | orchestrator | 2026-03-28 01:16:20.940552 | orchestrator | 2026-03-28 01:16:20.940565 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:16:20.940575 | orchestrator | 2026-03-28 01:16:20.940584 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:16:20.940594 | orchestrator | Saturday 28 March 2026 01:10:55 +0000 (0:00:00.381) 0:00:00.382 ******** 2026-03-28 01:16:20.940603 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:16:20.940614 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:16:20.940622 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:16:20.940631 | orchestrator | 2026-03-28 01:16:20.940640 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:16:20.940649 | orchestrator | Saturday 28 March 2026 01:10:55 +0000 (0:00:00.447) 0:00:00.829 ******** 2026-03-28 01:16:20.940659 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-28 01:16:20.940668 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-28 01:16:20.940677 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-28 01:16:20.940685 | orchestrator | 2026-03-28 01:16:20.940694 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-28 01:16:20.940703 | orchestrator | 2026-03-28 01:16:20.940712 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:16:20.940720 | orchestrator | Saturday 28 March 2026 01:10:56 +0000 (0:00:00.632) 0:00:01.462 ******** 2026-03-28 01:16:20.940729 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:16:20.940738 | orchestrator | 2026-03-28 01:16:20.940747 | 
orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] ************** 2026-03-28 01:16:20.940756 | orchestrator | Saturday 28 March 2026 01:10:57 +0000 (0:00:01.788) 0:00:03.250 ******** 2026-03-28 01:16:20.940765 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-28 01:16:20.940774 | orchestrator | 2026-03-28 01:16:20.940782 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] ************* 2026-03-28 01:16:20.940791 | orchestrator | Saturday 28 March 2026 01:11:02 +0000 (0:00:04.726) 0:00:07.977 ******** 2026-03-28 01:16:20.940799 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-28 01:16:20.940808 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-28 01:16:20.940817 | orchestrator | 2026-03-28 01:16:20.940826 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-28 01:16:20.940834 | orchestrator | Saturday 28 March 2026 01:11:09 +0000 (0:00:06.955) 0:00:14.934 ******** 2026-03-28 01:16:20.940843 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-28 01:16:20.940852 | orchestrator | 2026-03-28 01:16:20.940861 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-28 01:16:20.940869 | orchestrator | Saturday 28 March 2026 01:11:13 +0000 (0:00:03.989) 0:00:18.923 ******** 2026-03-28 01:16:20.940878 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-28 01:16:20.940887 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-28 01:16:20.940896 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-28 01:16:20.940905 | orchestrator | 2026-03-28 01:16:20.940914 | orchestrator | TASK [service-ks-register : octavia | Creating roles] 
************************** 2026-03-28 01:16:20.940946 | orchestrator | Saturday 28 March 2026 01:11:23 +0000 (0:00:09.478) 0:00:28.402 ******** 2026-03-28 01:16:20.940961 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-28 01:16:20.940975 | orchestrator | 2026-03-28 01:16:20.940988 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************ 2026-03-28 01:16:20.941002 | orchestrator | Saturday 28 March 2026 01:11:26 +0000 (0:00:03.967) 0:00:32.370 ******** 2026-03-28 01:16:20.941016 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-28 01:16:20.941030 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-28 01:16:20.941045 | orchestrator | 2026-03-28 01:16:20.941059 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-28 01:16:20.941073 | orchestrator | Saturday 28 March 2026 01:11:36 +0000 (0:00:09.073) 0:00:41.443 ******** 2026-03-28 01:16:20.941082 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-28 01:16:20.941135 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-28 01:16:20.941145 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-28 01:16:20.941155 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-28 01:16:20.941165 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-28 01:16:20.941175 | orchestrator | 2026-03-28 01:16:20.941184 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:16:20.941194 | orchestrator | Saturday 28 March 2026 01:11:55 +0000 (0:00:19.736) 0:01:01.180 ******** 2026-03-28 01:16:20.941216 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:16:20.941227 | 
orchestrator | 2026-03-28 01:16:20.941238 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-28 01:16:20.941247 | orchestrator | Saturday 28 March 2026 01:11:56 +0000 (0:00:00.967) 0:01:02.147 ******** 2026-03-28 01:16:20.941258 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.941267 | orchestrator | 2026-03-28 01:16:20.941276 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-28 01:16:20.941286 | orchestrator | Saturday 28 March 2026 01:12:03 +0000 (0:00:06.320) 0:01:08.468 ******** 2026-03-28 01:16:20.941296 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.941305 | orchestrator | 2026-03-28 01:16:20.941315 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-28 01:16:20.941442 | orchestrator | Saturday 28 March 2026 01:12:08 +0000 (0:00:05.268) 0:01:13.737 ******** 2026-03-28 01:16:20.941453 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:16:20.941462 | orchestrator | 2026-03-28 01:16:20.941471 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-28 01:16:20.941479 | orchestrator | Saturday 28 March 2026 01:12:12 +0000 (0:00:03.938) 0:01:17.676 ******** 2026-03-28 01:16:20.941488 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-28 01:16:20.941497 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-28 01:16:20.941505 | orchestrator | 2026-03-28 01:16:20.941514 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-28 01:16:20.941523 | orchestrator | Saturday 28 March 2026 01:12:24 +0000 (0:00:12.702) 0:01:30.379 ******** 2026-03-28 01:16:20.941536 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-28 01:16:20.941550 | orchestrator | 
changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-28 01:16:20.941569 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-28 01:16:20.941591 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-28 01:16:20.941620 | orchestrator | 2026-03-28 01:16:20.941634 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-28 01:16:20.941647 | orchestrator | Saturday 28 March 2026 01:12:43 +0000 (0:00:18.713) 0:01:49.092 ******** 2026-03-28 01:16:20.942261 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.942285 | orchestrator | 2026-03-28 01:16:20.942295 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-28 01:16:20.942304 | orchestrator | Saturday 28 March 2026 01:12:50 +0000 (0:00:06.658) 0:01:55.750 ******** 2026-03-28 01:16:20.942610 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.942621 | orchestrator | 2026-03-28 01:16:20.942630 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-28 01:16:20.942639 | orchestrator | Saturday 28 March 2026 01:12:56 +0000 (0:00:06.363) 0:02:02.113 ******** 2026-03-28 01:16:20.942647 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:16:20.942656 | orchestrator | 2026-03-28 01:16:20.942665 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-28 01:16:20.942673 | orchestrator | Saturday 28 March 2026 01:12:57 +0000 (0:00:00.698) 0:02:02.812 ******** 2026-03-28 01:16:20.942682 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:16:20.942691 | orchestrator | 2026-03-28 
01:16:20.942700 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:16:20.942709 | orchestrator | Saturday 28 March 2026 01:13:01 +0000 (0:00:04.535) 0:02:07.347 ******** 2026-03-28 01:16:20.942717 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:16:20.942726 | orchestrator | 2026-03-28 01:16:20.942735 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-28 01:16:20.942743 | orchestrator | Saturday 28 March 2026 01:13:03 +0000 (0:00:01.281) 0:02:08.629 ******** 2026-03-28 01:16:20.942752 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.942761 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.942769 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.942778 | orchestrator | 2026-03-28 01:16:20.942787 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-28 01:16:20.942796 | orchestrator | Saturday 28 March 2026 01:13:09 +0000 (0:00:06.689) 0:02:15.319 ******** 2026-03-28 01:16:20.942804 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.942813 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.942836 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.942846 | orchestrator | 2026-03-28 01:16:20.942855 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-28 01:16:20.942864 | orchestrator | Saturday 28 March 2026 01:13:15 +0000 (0:00:05.533) 0:02:20.852 ******** 2026-03-28 01:16:20.942875 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.942891 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.942906 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.942920 | orchestrator | 2026-03-28 01:16:20.942934 | orchestrator | TASK [octavia : Install isc-dhcp-client 
package] ******************************* 2026-03-28 01:16:20.942947 | orchestrator | Saturday 28 March 2026 01:13:17 +0000 (0:00:01.822) 0:02:22.674 ******** 2026-03-28 01:16:20.942961 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:16:20.942974 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:16:20.942988 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:16:20.943004 | orchestrator | 2026-03-28 01:16:20.943019 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-28 01:16:20.943035 | orchestrator | Saturday 28 March 2026 01:13:19 +0000 (0:00:02.576) 0:02:25.251 ******** 2026-03-28 01:16:20.943049 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.943076 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.943123 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.943140 | orchestrator | 2026-03-28 01:16:20.943154 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-28 01:16:20.943168 | orchestrator | Saturday 28 March 2026 01:13:21 +0000 (0:00:01.934) 0:02:27.186 ******** 2026-03-28 01:16:20.943198 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.943212 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.943228 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.943243 | orchestrator | 2026-03-28 01:16:20.943259 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-28 01:16:20.943274 | orchestrator | Saturday 28 March 2026 01:13:23 +0000 (0:00:01.269) 0:02:28.455 ******** 2026-03-28 01:16:20.943288 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.943298 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.943307 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.943317 | orchestrator | 2026-03-28 01:16:20.943374 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] 
******************** 2026-03-28 01:16:20.943386 | orchestrator | Saturday 28 March 2026 01:13:25 +0000 (0:00:02.541) 0:02:30.997 ******** 2026-03-28 01:16:20.943396 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.943406 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.943416 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.943426 | orchestrator | 2026-03-28 01:16:20.943436 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-28 01:16:20.943445 | orchestrator | Saturday 28 March 2026 01:13:27 +0000 (0:00:01.750) 0:02:32.748 ******** 2026-03-28 01:16:20.943455 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:16:20.943465 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:16:20.943479 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:16:20.943494 | orchestrator | 2026-03-28 01:16:20.943509 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-28 01:16:20.943524 | orchestrator | Saturday 28 March 2026 01:13:27 +0000 (0:00:00.614) 0:02:33.363 ******** 2026-03-28 01:16:20.943539 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:16:20.943555 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:16:20.943570 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:16:20.943585 | orchestrator | 2026-03-28 01:16:20.943600 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:16:20.943615 | orchestrator | Saturday 28 March 2026 01:13:30 +0000 (0:00:02.999) 0:02:36.362 ******** 2026-03-28 01:16:20.943631 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:16:20.943646 | orchestrator | 2026-03-28 01:16:20.943660 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-28 01:16:20.943673 | orchestrator | Saturday 28 March 2026 01:13:31 
+0000 (0:00:00.766) 0:02:37.129 ******** 2026-03-28 01:16:20.943682 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:16:20.943690 | orchestrator | 2026-03-28 01:16:20.943699 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-28 01:16:20.943707 | orchestrator | Saturday 28 March 2026 01:13:36 +0000 (0:00:04.443) 0:02:41.573 ******** 2026-03-28 01:16:20.943716 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:16:20.943724 | orchestrator | 2026-03-28 01:16:20.943732 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-28 01:16:20.943741 | orchestrator | Saturday 28 March 2026 01:13:40 +0000 (0:00:04.006) 0:02:45.580 ******** 2026-03-28 01:16:20.943749 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-28 01:16:20.943758 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-28 01:16:20.943767 | orchestrator | 2026-03-28 01:16:20.943775 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-28 01:16:20.943784 | orchestrator | Saturday 28 March 2026 01:13:48 +0000 (0:00:08.628) 0:02:54.208 ******** 2026-03-28 01:16:20.943792 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:16:20.943800 | orchestrator | 2026-03-28 01:16:20.943809 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-28 01:16:20.943817 | orchestrator | Saturday 28 March 2026 01:13:53 +0000 (0:00:04.304) 0:02:58.513 ******** 2026-03-28 01:16:20.943826 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:16:20.943847 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:16:20.943856 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:16:20.943864 | orchestrator | 2026-03-28 01:16:20.943873 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-28 01:16:20.943881 | orchestrator | Saturday 28 March 2026 
01:13:53 +0000 (0:00:00.599) 0:02:59.112 ******** 2026-03-28 01:16:20.943893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.943946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.943958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.943968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.943979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.943994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.944003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}}) 2026-03-28 01:16:20.944051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944203 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944214 | orchestrator | 2026-03-28 01:16:20.944222 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-28 01:16:20.944231 | orchestrator | Saturday 28 March 2026 01:13:57 +0000 (0:00:04.150) 0:03:03.263 ******** 2026-03-28 01:16:20.944240 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:16:20.944249 | orchestrator | 2026-03-28 01:16:20.944257 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-28 01:16:20.944266 | orchestrator | Saturday 28 March 2026 01:13:58 +0000 (0:00:00.128) 0:03:03.392 ******** 2026-03-28 01:16:20.944274 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:16:20.944282 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:16:20.944289 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:16:20.944297 | orchestrator | 2026-03-28 01:16:20.944305 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-28 01:16:20.944312 | orchestrator | Saturday 28 March 2026 01:13:58 +0000 (0:00:00.313) 0:03:03.705 ******** 2026-03-28 01:16:20.944321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.944342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.944356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.944370 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.944390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.944404 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:16:20.944446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.944467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.944482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.944495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.944508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.944521 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:16:20.944578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.944594 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.944614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.944627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.944639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.944652 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:16:20.944664 | orchestrator | 2026-03-28 01:16:20.944677 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:16:20.944690 | orchestrator | Saturday 28 March 2026 01:13:59 +0000 (0:00:00.772) 0:03:04.477 ******** 2026-03-28 01:16:20.944703 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-28 01:16:20.944716 | orchestrator | 2026-03-28 01:16:20.944727 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-28 01:16:20.944740 | orchestrator | Saturday 28 March 2026 01:13:59 +0000 (0:00:00.791) 0:03:05.269 ******** 2026-03-28 01:16:20.944760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.944816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.944842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.944856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.944870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.944884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.944903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.944991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945156 | orchestrator | 2026-03-28 01:16:20.945170 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-28 01:16:20.945184 | orchestrator | Saturday 28 
March 2026 01:14:05 +0000 (0:00:05.494) 0:03:10.764 ******** 2026-03-28 01:16:20.945199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.945213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.945227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.945330 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:16:20.945345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.945360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.945373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.945421 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:16:20.945444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.945467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.945481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.945520 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:16:20.945533 | orchestrator | 2026-03-28 01:16:20.945548 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-28 01:16:20.945562 | orchestrator | Saturday 28 March 2026 01:14:06 +0000 (0:00:00.746) 0:03:11.511 ******** 2026-03-28 01:16:20.945582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.945616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.945630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.945663 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:16:20.945671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.945693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.945708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.945733 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:16:20.945742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.945750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.945767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.945791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.945800 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:16:20.945808 | orchestrator | 2026-03-28 01:16:20.945815 | 
orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-28 01:16:20.945823 | orchestrator | Saturday 28 March 2026 01:14:07 +0000 (0:00:01.209) 0:03:12.720 ******** 2026-03-28 01:16:20.945832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.945841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.945858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.945871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.945880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.945888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.945896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.945990 | orchestrator | 2026-03-28 01:16:20.945999 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-28 01:16:20.946007 | orchestrator | Saturday 28 March 2026 01:14:12 +0000 (0:00:05.613) 0:03:18.334 ******** 2026-03-28 01:16:20.946044 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-28 01:16:20.946056 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-28 01:16:20.946064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-28 01:16:20.946072 | orchestrator | 2026-03-28 01:16:20.946080 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-28 01:16:20.946156 | orchestrator | Saturday 28 March 2026 01:14:14 +0000 (0:00:01.783) 0:03:20.117 ******** 2026-03-28 01:16:20.946183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.946197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.946212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.946228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.946236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.946248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.946261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946340 | orchestrator | 2026-03-28 01:16:20.946347 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-28 01:16:20.946353 | orchestrator | Saturday 28 March 2026 01:14:33 +0000 (0:00:18.448) 0:03:38.565 ******** 2026-03-28 01:16:20.946360 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.946366 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.946373 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.946384 | orchestrator | 2026-03-28 01:16:20.946391 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-28 01:16:20.946397 | orchestrator | Saturday 28 March 2026 01:14:35 +0000 (0:00:01.994) 0:03:40.560 ******** 2026-03-28 01:16:20.946404 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-28 01:16:20.946410 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-28 01:16:20.946420 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-28 01:16:20.946430 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-28 01:16:20.946441 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-28 01:16:20.946451 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-28 01:16:20.946462 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-28 01:16:20.946473 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-28 01:16:20.946484 | orchestrator | 
changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-28 01:16:20.946493 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-28 01:16:20.946499 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-28 01:16:20.946506 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-28 01:16:20.946512 | orchestrator | 2026-03-28 01:16:20.946519 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-28 01:16:20.946525 | orchestrator | Saturday 28 March 2026 01:14:40 +0000 (0:00:05.436) 0:03:45.996 ******** 2026-03-28 01:16:20.946532 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-28 01:16:20.946538 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-28 01:16:20.946545 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-28 01:16:20.946551 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-28 01:16:20.946558 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-28 01:16:20.946564 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-28 01:16:20.946571 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-28 01:16:20.946577 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-28 01:16:20.946584 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-28 01:16:20.946590 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-28 01:16:20.946597 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-28 01:16:20.946603 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-28 01:16:20.946610 | orchestrator | 2026-03-28 01:16:20.946620 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] 
********** 2026-03-28 01:16:20.946627 | orchestrator | Saturday 28 March 2026 01:14:46 +0000 (0:00:05.427) 0:03:51.424 ******** 2026-03-28 01:16:20.946633 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-28 01:16:20.946640 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-28 01:16:20.946646 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-28 01:16:20.946652 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-28 01:16:20.946659 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-28 01:16:20.946665 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-28 01:16:20.946672 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-28 01:16:20.946678 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-28 01:16:20.946689 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-28 01:16:20.946696 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-28 01:16:20.946702 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-28 01:16:20.946716 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-28 01:16:20.946723 | orchestrator | 2026-03-28 01:16:20.946730 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-03-28 01:16:20.946736 | orchestrator | Saturday 28 March 2026 01:14:51 +0000 (0:00:05.522) 0:03:56.947 ******** 2026-03-28 01:16:20.946743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.946750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.946757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-28 01:16:20.946768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.946779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.946790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-28 01:16:20.946797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-28 01:16:20.946898 | orchestrator | 2026-03-28 01:16:20.946909 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-03-28 01:16:20.946919 | orchestrator | Saturday 28 March 2026 01:14:55 +0000 (0:00:04.288) 0:04:01.236 ******** 2026-03-28 01:16:20.946925 | orchestrator | changed: [testbed-node-0] => { 2026-03-28 01:16:20.946932 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:16:20.946938 | orchestrator | } 2026-03-28 01:16:20.946945 | orchestrator | changed: [testbed-node-1] => { 2026-03-28 01:16:20.946952 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:16:20.946958 | orchestrator | } 2026-03-28 01:16:20.946964 | orchestrator | changed: [testbed-node-2] => { 2026-03-28 01:16:20.946971 | orchestrator |  "msg": "Notifying handlers" 2026-03-28 01:16:20.946982 | orchestrator | } 2026-03-28 01:16:20.946993 | orchestrator | 2026-03-28 01:16:20.947004 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-28 01:16:20.947014 | orchestrator | Saturday 28 March 2026 01:14:56 +0000 (0:00:00.604) 0:04:01.840 ******** 2026-03-28 01:16:20.947030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.947058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.947070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.947082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.947114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.947125 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:16:20.947136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.947162 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.947180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.947192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.947204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.947215 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:16:20.947222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-28 01:16:20.947229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-28 01:16:20.947244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.947255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-28 01:16:20.947262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-03-28 01:16:20.947269 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:16:20.947276 | orchestrator | 2026-03-28 01:16:20.947283 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-28 01:16:20.947289 | orchestrator | Saturday 28 March 2026 01:14:57 +0000 (0:00:00.954) 0:04:02.794 ******** 2026-03-28 01:16:20.947296 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:16:20.947303 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:16:20.947309 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:16:20.947316 | orchestrator | 2026-03-28 01:16:20.947322 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-28 01:16:20.947329 | orchestrator | Saturday 28 March 2026 01:14:57 +0000 (0:00:00.312) 0:04:03.106 ******** 2026-03-28 01:16:20.947335 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.947342 | orchestrator | 2026-03-28 01:16:20.947349 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-28 01:16:20.947355 | orchestrator | Saturday 28 March 2026 01:15:00 +0000 (0:00:02.488) 0:04:05.595 ******** 2026-03-28 01:16:20.947362 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.947368 | orchestrator | 2026-03-28 01:16:20.947375 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-28 01:16:20.947381 | orchestrator | Saturday 28 March 2026 01:15:02 +0000 (0:00:02.475) 0:04:08.070 ******** 2026-03-28 01:16:20.947388 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.947395 | orchestrator | 2026-03-28 01:16:20.947401 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-28 01:16:20.947408 | orchestrator | Saturday 28 March 2026 01:15:05 +0000 (0:00:03.111) 0:04:11.182 ******** 2026-03-28 
01:16:20.947414 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.947421 | orchestrator | 2026-03-28 01:16:20.947427 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-28 01:16:20.947434 | orchestrator | Saturday 28 March 2026 01:15:08 +0000 (0:00:02.664) 0:04:13.847 ******** 2026-03-28 01:16:20.947441 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.947447 | orchestrator | 2026-03-28 01:16:20.947458 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 01:16:20.947464 | orchestrator | Saturday 28 March 2026 01:15:34 +0000 (0:00:25.865) 0:04:39.712 ******** 2026-03-28 01:16:20.947471 | orchestrator | 2026-03-28 01:16:20.947477 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 01:16:20.947484 | orchestrator | Saturday 28 March 2026 01:15:34 +0000 (0:00:00.072) 0:04:39.785 ******** 2026-03-28 01:16:20.947490 | orchestrator | 2026-03-28 01:16:20.947497 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-28 01:16:20.947503 | orchestrator | Saturday 28 March 2026 01:15:34 +0000 (0:00:00.079) 0:04:39.865 ******** 2026-03-28 01:16:20.947510 | orchestrator | 2026-03-28 01:16:20.947516 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-28 01:16:20.947523 | orchestrator | Saturday 28 March 2026 01:15:34 +0000 (0:00:00.075) 0:04:39.940 ******** 2026-03-28 01:16:20.947529 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.947536 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.947543 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.947549 | orchestrator | 2026-03-28 01:16:20.947556 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-28 01:16:20.947562 | orchestrator | Saturday 28 March 
2026 01:15:46 +0000 (0:00:12.230) 0:04:52.170 ******** 2026-03-28 01:16:20.947569 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.947576 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.947583 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.947589 | orchestrator | 2026-03-28 01:16:20.947596 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-28 01:16:20.947602 | orchestrator | Saturday 28 March 2026 01:16:00 +0000 (0:00:13.252) 0:05:05.423 ******** 2026-03-28 01:16:20.947609 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.947615 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.947622 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.947628 | orchestrator | 2026-03-28 01:16:20.947635 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-28 01:16:20.947641 | orchestrator | Saturday 28 March 2026 01:16:05 +0000 (0:00:05.901) 0:05:11.324 ******** 2026-03-28 01:16:20.947651 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.947658 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.947664 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.947671 | orchestrator | 2026-03-28 01:16:20.947678 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-28 01:16:20.947684 | orchestrator | Saturday 28 March 2026 01:16:14 +0000 (0:00:08.548) 0:05:19.873 ******** 2026-03-28 01:16:20.947691 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:16:20.947698 | orchestrator | changed: [testbed-node-1] 2026-03-28 01:16:20.947704 | orchestrator | changed: [testbed-node-2] 2026-03-28 01:16:20.947711 | orchestrator | 2026-03-28 01:16:20.947717 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:16:20.947724 | orchestrator | testbed-node-0 : ok=58  
changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-28 01:16:20.947735 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 01:16:20.947742 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-28 01:16:20.947749 | orchestrator | 2026-03-28 01:16:20.947756 | orchestrator | 2026-03-28 01:16:20.947762 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:16:20.947769 | orchestrator | Saturday 28 March 2026 01:16:19 +0000 (0:00:05.417) 0:05:25.290 ******** 2026-03-28 01:16:20.947775 | orchestrator | =============================================================================== 2026-03-28 01:16:20.947782 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 25.87s 2026-03-28 01:16:20.947793 | orchestrator | octavia : Adding octavia related roles --------------------------------- 19.74s 2026-03-28 01:16:20.947799 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.71s 2026-03-28 01:16:20.947806 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.45s 2026-03-28 01:16:20.947812 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 13.25s 2026-03-28 01:16:20.947819 | orchestrator | octavia : Create security groups for octavia --------------------------- 12.70s 2026-03-28 01:16:20.947826 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.23s 2026-03-28 01:16:20.947832 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.48s 2026-03-28 01:16:20.947839 | orchestrator | service-ks-register : octavia | Granting/revoking user roles ------------ 9.07s 2026-03-28 01:16:20.947845 | orchestrator | octavia : Get security groups for octavia 
------------------------------- 8.63s 2026-03-28 01:16:20.947852 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.55s 2026-03-28 01:16:20.947858 | orchestrator | service-ks-register : octavia | Creating/deleting endpoints ------------- 6.96s 2026-03-28 01:16:20.947865 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.69s 2026-03-28 01:16:20.947872 | orchestrator | octavia : Create loadbalancer management network ------------------------ 6.66s 2026-03-28 01:16:20.947878 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.36s 2026-03-28 01:16:20.947885 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.32s 2026-03-28 01:16:20.947891 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.90s 2026-03-28 01:16:20.947901 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.61s 2026-03-28 01:16:20.947912 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.53s 2026-03-28 01:16:20.947923 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.52s 2026-03-28 01:16:20.947933 | orchestrator | 2026-03-28 01:16:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:23.992524 | orchestrator | 2026-03-28 01:16:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:27.033044 | orchestrator | 2026-03-28 01:16:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:30.073232 | orchestrator | 2026-03-28 01:16:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:33.117182 | orchestrator | 2026-03-28 01:16:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:36.162548 | orchestrator | 2026-03-28 01:16:36 | INFO  | Wait 1 second(s) until refresh of 
running tasks 2026-03-28 01:16:39.198424 | orchestrator | 2026-03-28 01:16:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:42.236717 | orchestrator | 2026-03-28 01:16:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:45.285566 | orchestrator | 2026-03-28 01:16:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:48.318497 | orchestrator | 2026-03-28 01:16:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:51.364179 | orchestrator | 2026-03-28 01:16:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:54.411942 | orchestrator | 2026-03-28 01:16:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:16:57.450660 | orchestrator | 2026-03-28 01:16:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:17:00.490306 | orchestrator | 2026-03-28 01:17:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:17:03.534489 | orchestrator | 2026-03-28 01:17:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:17:06.581740 | orchestrator | 2026-03-28 01:17:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:17:09.627578 | orchestrator | 2026-03-28 01:17:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:17:12.671291 | orchestrator | 2026-03-28 01:17:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:17:15.715606 | orchestrator | 2026-03-28 01:17:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:17:18.759239 | orchestrator | 2026-03-28 01:17:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-28 01:17:21.803081 | orchestrator | 2026-03-28 01:17:22.029526 | orchestrator | 2026-03-28 01:17:22.034335 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Mar 28 01:17:22 UTC 2026 2026-03-28 01:17:22.034495 | orchestrator | 2026-03-28 01:17:22.515386 | 
orchestrator | ok: Runtime: 0:37:34.398799 2026-03-28 01:17:22.853287 | 2026-03-28 01:17:22.853499 | TASK [Bootstrap services] 2026-03-28 01:17:23.636927 | orchestrator | 2026-03-28 01:17:23.637135 | orchestrator | # BOOTSTRAP 2026-03-28 01:17:23.637157 | orchestrator | 2026-03-28 01:17:23.637169 | orchestrator | + set -e 2026-03-28 01:17:23.637179 | orchestrator | + echo 2026-03-28 01:17:23.637192 | orchestrator | + echo '# BOOTSTRAP' 2026-03-28 01:17:23.637206 | orchestrator | + echo 2026-03-28 01:17:23.637244 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-28 01:17:23.645115 | orchestrator | + set -e 2026-03-28 01:17:23.645224 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-28 01:17:29.904077 | orchestrator | 2026-03-28 01:17:29 | INFO  | It takes a moment until task e568d54f-3c80-46a3-ba4b-a1e444e6de9d (flavor-manager) has been started and output is visible here. 2026-03-28 01:17:41.278405 | orchestrator | 2026-03-28 01:17:35 | INFO  | Flavor SCS-1L-1 created 2026-03-28 01:17:41.278480 | orchestrator | 2026-03-28 01:17:35 | INFO  | Flavor SCS-1L-1-5 created 2026-03-28 01:17:41.278491 | orchestrator | 2026-03-28 01:17:36 | INFO  | Flavor SCS-1V-2 created 2026-03-28 01:17:41.278498 | orchestrator | 2026-03-28 01:17:36 | INFO  | Flavor SCS-1V-2-5 created 2026-03-28 01:17:41.278505 | orchestrator | 2026-03-28 01:17:36 | INFO  | Flavor SCS-1V-4 created 2026-03-28 01:17:41.278512 | orchestrator | 2026-03-28 01:17:36 | INFO  | Flavor SCS-1V-4-10 created 2026-03-28 01:17:41.278519 | orchestrator | 2026-03-28 01:17:37 | INFO  | Flavor SCS-1V-8 created 2026-03-28 01:17:41.278526 | orchestrator | 2026-03-28 01:17:37 | INFO  | Flavor SCS-1V-8-20 created 2026-03-28 01:17:41.278536 | orchestrator | 2026-03-28 01:17:37 | INFO  | Flavor SCS-2V-4 created 2026-03-28 01:17:41.278541 | orchestrator | 2026-03-28 01:17:37 | INFO  | Flavor SCS-2V-4-10 created 2026-03-28 01:17:41.278545 | orchestrator | 
2026-03-28 01:17:37 | INFO  | Flavor SCS-2V-8 created 2026-03-28 01:17:41.278549 | orchestrator | 2026-03-28 01:17:38 | INFO  | Flavor SCS-2V-8-20 created 2026-03-28 01:17:41.278552 | orchestrator | 2026-03-28 01:17:38 | INFO  | Flavor SCS-2V-16 created 2026-03-28 01:17:41.278556 | orchestrator | 2026-03-28 01:17:38 | INFO  | Flavor SCS-2V-16-50 created 2026-03-28 01:17:41.278560 | orchestrator | 2026-03-28 01:17:38 | INFO  | Flavor SCS-4V-8 created 2026-03-28 01:17:41.278564 | orchestrator | 2026-03-28 01:17:38 | INFO  | Flavor SCS-4V-8-20 created 2026-03-28 01:17:41.278568 | orchestrator | 2026-03-28 01:17:38 | INFO  | Flavor SCS-4V-16 created 2026-03-28 01:17:41.278581 | orchestrator | 2026-03-28 01:17:38 | INFO  | Flavor SCS-4V-16-50 created 2026-03-28 01:17:41.278588 | orchestrator | 2026-03-28 01:17:39 | INFO  | Flavor SCS-4V-32 created 2026-03-28 01:17:41.278600 | orchestrator | 2026-03-28 01:17:39 | INFO  | Flavor SCS-4V-32-100 created 2026-03-28 01:17:41.278607 | orchestrator | 2026-03-28 01:17:39 | INFO  | Flavor SCS-8V-16 created 2026-03-28 01:17:41.278614 | orchestrator | 2026-03-28 01:17:39 | INFO  | Flavor SCS-8V-16-50 created 2026-03-28 01:17:41.278621 | orchestrator | 2026-03-28 01:17:39 | INFO  | Flavor SCS-8V-32 created 2026-03-28 01:17:41.278627 | orchestrator | 2026-03-28 01:17:39 | INFO  | Flavor SCS-8V-32-100 created 2026-03-28 01:17:41.278633 | orchestrator | 2026-03-28 01:17:40 | INFO  | Flavor SCS-16V-32 created 2026-03-28 01:17:41.278640 | orchestrator | 2026-03-28 01:17:40 | INFO  | Flavor SCS-16V-32-100 created 2026-03-28 01:17:41.278646 | orchestrator | 2026-03-28 01:17:40 | INFO  | Flavor SCS-2V-4-20s created 2026-03-28 01:17:41.278652 | orchestrator | 2026-03-28 01:17:40 | INFO  | Flavor SCS-4V-8-50s created 2026-03-28 01:17:41.278659 | orchestrator | 2026-03-28 01:17:40 | INFO  | Flavor SCS-4V-16-100s created 2026-03-28 01:17:41.278666 | orchestrator | 2026-03-28 01:17:41 | INFO  | Flavor SCS-8V-32-100s created 2026-03-28 
01:17:42.696531 | orchestrator | 2026-03-28 01:17:42 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-28 01:17:52.879857 | orchestrator | 2026-03-28 01:17:52 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-28 01:17:52.989303 | orchestrator | 2026-03-28 01:17:52 | INFO  | Task 178679fe-ee65-48fb-a98a-de1855f7f4b1 (bootstrap-basic) was prepared for execution. 2026-03-28 01:17:52.989438 | orchestrator | 2026-03-28 01:17:52 | INFO  | It takes a moment until task 178679fe-ee65-48fb-a98a-de1855f7f4b1 (bootstrap-basic) has been started and output is visible here. 2026-03-28 01:18:45.294222 | orchestrator | 2026-03-28 01:18:45.294326 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-28 01:18:45.294344 | orchestrator | 2026-03-28 01:18:45.294359 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-28 01:18:45.294375 | orchestrator | Saturday 28 March 2026 01:17:56 +0000 (0:00:00.128) 0:00:00.128 ******** 2026-03-28 01:18:45.294391 | orchestrator | ok: [localhost] 2026-03-28 01:18:45.294411 | orchestrator | 2026-03-28 01:18:45.294431 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-28 01:18:45.294446 | orchestrator | Saturday 28 March 2026 01:17:58 +0000 (0:00:02.191) 0:00:02.320 ******** 2026-03-28 01:18:45.294463 | orchestrator | ok: [localhost] 2026-03-28 01:18:45.294478 | orchestrator | 2026-03-28 01:18:45.294490 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-28 01:18:45.294504 | orchestrator | Saturday 28 March 2026 01:18:09 +0000 (0:00:10.853) 0:00:13.174 ******** 2026-03-28 01:18:45.294519 | orchestrator | changed: [localhost] 2026-03-28 01:18:45.294534 | orchestrator | 2026-03-28 01:18:45.294548 | orchestrator | TASK [Create public network] *************************************************** 
2026-03-28 01:18:45.294561 | orchestrator | Saturday 28 March 2026 01:18:18 +0000 (0:00:08.405) 0:00:21.580 ******** 2026-03-28 01:18:45.294575 | orchestrator | changed: [localhost] 2026-03-28 01:18:45.294588 | orchestrator | 2026-03-28 01:18:45.294606 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-28 01:18:45.294621 | orchestrator | Saturday 28 March 2026 01:18:24 +0000 (0:00:06.049) 0:00:27.629 ******** 2026-03-28 01:18:45.294636 | orchestrator | changed: [localhost] 2026-03-28 01:18:45.294651 | orchestrator | 2026-03-28 01:18:45.294667 | orchestrator | TASK [Create public subnet] **************************************************** 2026-03-28 01:18:45.294682 | orchestrator | Saturday 28 March 2026 01:18:31 +0000 (0:00:07.261) 0:00:34.891 ******** 2026-03-28 01:18:45.294697 | orchestrator | changed: [localhost] 2026-03-28 01:18:45.294711 | orchestrator | 2026-03-28 01:18:45.294725 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-28 01:18:45.294740 | orchestrator | Saturday 28 March 2026 01:18:36 +0000 (0:00:05.076) 0:00:39.967 ******** 2026-03-28 01:18:45.294756 | orchestrator | changed: [localhost] 2026-03-28 01:18:45.294772 | orchestrator | 2026-03-28 01:18:45.294787 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-28 01:18:45.294815 | orchestrator | Saturday 28 March 2026 01:18:40 +0000 (0:00:04.219) 0:00:44.187 ******** 2026-03-28 01:18:45.294832 | orchestrator | ok: [localhost] 2026-03-28 01:18:45.294848 | orchestrator | 2026-03-28 01:18:45.294864 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:18:45.294880 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-28 01:18:45.294923 | orchestrator | 2026-03-28 01:18:45.294939 | orchestrator | 2026-03-28 01:18:45.294952 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:18:45.294965 | orchestrator | Saturday 28 March 2026 01:18:45 +0000 (0:00:04.205) 0:00:48.393 ******** 2026-03-28 01:18:45.294977 | orchestrator | =============================================================================== 2026-03-28 01:18:45.294990 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.85s 2026-03-28 01:18:45.295030 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.41s 2026-03-28 01:18:45.295041 | orchestrator | Set public network to default ------------------------------------------- 7.26s 2026-03-28 01:18:45.295052 | orchestrator | Create public network --------------------------------------------------- 6.05s 2026-03-28 01:18:45.295063 | orchestrator | Create public subnet ---------------------------------------------------- 5.08s 2026-03-28 01:18:45.295074 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.22s 2026-03-28 01:18:45.295085 | orchestrator | Create manager role ----------------------------------------------------- 4.21s 2026-03-28 01:18:45.295096 | orchestrator | Gathering Facts --------------------------------------------------------- 2.19s 2026-03-28 01:18:47.585628 | orchestrator | 2026-03-28 01:18:47 | INFO  | It takes a moment until task 607f358b-07c2-4a3d-af9f-3d20c2bcbb05 (image-manager) has been started and output is visible here. 
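The flavor-manager run above creates flavors named per the SCS scheme (e.g. `SCS-4V-16-50`, `SCS-2V-4-20s`). A minimal sketch of decoding such names, assuming the convention `SCS-<cpus><V|L>-<ramGiB>[-<diskGB>[s]]` with a trailing `s` marking SSD-backed disk; this parser is illustrative only and is not part of the testbed or flavor-manager tooling.

```python
import re

# Hypothetical decoder for the SCS flavor names seen in the log above,
# assuming the pattern SCS-<cpus><V|L>-<ramGiB>[-<diskGB>[s]].
_SCS_NAME = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_type>[VL])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name into its resource components."""
    m = _SCS_NAME.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "cpu_type": m.group("cpu_type"),  # 'V' or 'L' per the assumed scheme
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "ssd": m.group("ssd") is not None,
    }

print(parse_scs_flavor("SCS-4V-16-50"))
print(parse_scs_flavor("SCS-2V-4-20s"))
```

Flavors without a disk component (e.g. `SCS-8V-32`) decode with `disk_gb` of 0, matching the boot-from-volume style names in the run.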
2026-03-28 01:19:29.202474 | orchestrator | 2026-03-28 01:18:50 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-28 01:19:29.202613 | orchestrator | 2026-03-28 01:18:50 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-28 01:19:29.202634 | orchestrator | 2026-03-28 01:18:50 | INFO  | Importing image Cirros 0.6.2 2026-03-28 01:19:29.202647 | orchestrator | 2026-03-28 01:18:50 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-28 01:19:29.202661 | orchestrator | 2026-03-28 01:18:53 | INFO  | Waiting for image to leave queued state... 2026-03-28 01:19:29.203709 | orchestrator | 2026-03-28 01:18:55 | INFO  | Waiting for import to complete... 2026-03-28 01:19:29.203753 | orchestrator | 2026-03-28 01:19:05 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-28 01:19:29.203768 | orchestrator | 2026-03-28 01:19:05 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-28 01:19:29.203780 | orchestrator | 2026-03-28 01:19:05 | INFO  | Setting internal_version = 0.6.2 2026-03-28 01:19:29.203791 | orchestrator | 2026-03-28 01:19:05 | INFO  | Setting image_original_user = cirros 2026-03-28 01:19:29.203803 | orchestrator | 2026-03-28 01:19:05 | INFO  | Adding tag os:cirros 2026-03-28 01:19:29.203813 | orchestrator | 2026-03-28 01:19:06 | INFO  | Setting property architecture: x86_64 2026-03-28 01:19:29.203824 | orchestrator | 2026-03-28 01:19:06 | INFO  | Setting property hw_disk_bus: scsi 2026-03-28 01:19:29.203835 | orchestrator | 2026-03-28 01:19:06 | INFO  | Setting property hw_rng_model: virtio 2026-03-28 01:19:29.203847 | orchestrator | 2026-03-28 01:19:06 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-28 01:19:29.203889 | orchestrator | 2026-03-28 01:19:07 | INFO  | Setting property hw_watchdog_action: reset 2026-03-28 01:19:29.203901 | orchestrator | 2026-03-28 01:19:07 | 
INFO  | Setting property hypervisor_type: qemu 2026-03-28 01:19:29.203928 | orchestrator | 2026-03-28 01:19:07 | INFO  | Setting property os_distro: cirros 2026-03-28 01:19:29.203939 | orchestrator | 2026-03-28 01:19:07 | INFO  | Setting property os_purpose: minimal 2026-03-28 01:19:29.203950 | orchestrator | 2026-03-28 01:19:07 | INFO  | Setting property replace_frequency: never 2026-03-28 01:19:29.203961 | orchestrator | 2026-03-28 01:19:08 | INFO  | Setting property uuid_validity: none 2026-03-28 01:19:29.203972 | orchestrator | 2026-03-28 01:19:08 | INFO  | Setting property provided_until: none 2026-03-28 01:19:29.203983 | orchestrator | 2026-03-28 01:19:08 | INFO  | Setting property image_description: Cirros 2026-03-28 01:19:29.203995 | orchestrator | 2026-03-28 01:19:08 | INFO  | Setting property image_name: Cirros 2026-03-28 01:19:29.204032 | orchestrator | 2026-03-28 01:19:08 | INFO  | Setting property internal_version: 0.6.2 2026-03-28 01:19:29.204043 | orchestrator | 2026-03-28 01:19:09 | INFO  | Setting property image_original_user: cirros 2026-03-28 01:19:29.204054 | orchestrator | 2026-03-28 01:19:09 | INFO  | Setting property os_version: 0.6.2 2026-03-28 01:19:29.204067 | orchestrator | 2026-03-28 01:19:09 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-28 01:19:29.204080 | orchestrator | 2026-03-28 01:19:09 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-28 01:19:29.204091 | orchestrator | 2026-03-28 01:19:09 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-28 01:19:29.204102 | orchestrator | 2026-03-28 01:19:09 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-28 01:19:29.204118 | orchestrator | 2026-03-28 01:19:09 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-28 01:19:29.204129 | orchestrator | 2026-03-28 01:19:10 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-28 01:19:29.204141 | orchestrator | 2026-03-28 
01:19:10 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-28 01:19:29.204153 | orchestrator | 2026-03-28 01:19:10 | INFO  | Importing image Cirros 0.6.3 2026-03-28 01:19:29.204164 | orchestrator | 2026-03-28 01:19:10 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-28 01:19:29.204175 | orchestrator | 2026-03-28 01:19:11 | INFO  | Waiting for image to leave queued state... 2026-03-28 01:19:29.204185 | orchestrator | 2026-03-28 01:19:14 | INFO  | Waiting for import to complete... 2026-03-28 01:19:29.204219 | orchestrator | 2026-03-28 01:19:24 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-28 01:19:29.204231 | orchestrator | 2026-03-28 01:19:24 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-28 01:19:29.204242 | orchestrator | 2026-03-28 01:19:24 | INFO  | Setting internal_version = 0.6.3 2026-03-28 01:19:29.204253 | orchestrator | 2026-03-28 01:19:24 | INFO  | Setting image_original_user = cirros 2026-03-28 01:19:29.204264 | orchestrator | 2026-03-28 01:19:24 | INFO  | Adding tag os:cirros 2026-03-28 01:19:29.204275 | orchestrator | 2026-03-28 01:19:24 | INFO  | Setting property architecture: x86_64 2026-03-28 01:19:29.204286 | orchestrator | 2026-03-28 01:19:24 | INFO  | Setting property hw_disk_bus: scsi 2026-03-28 01:19:29.204296 | orchestrator | 2026-03-28 01:19:25 | INFO  | Setting property hw_rng_model: virtio 2026-03-28 01:19:29.204307 | orchestrator | 2026-03-28 01:19:25 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-28 01:19:29.204318 | orchestrator | 2026-03-28 01:19:25 | INFO  | Setting property hw_watchdog_action: reset 2026-03-28 01:19:29.204329 | orchestrator | 2026-03-28 01:19:25 | INFO  | Setting property hypervisor_type: qemu 2026-03-28 01:19:29.204340 | orchestrator | 2026-03-28 01:19:26 | INFO  | Setting property os_distro: cirros 
2026-03-28 01:19:29.204350 | orchestrator | 2026-03-28 01:19:26 | INFO  | Setting property os_purpose: minimal 2026-03-28 01:19:29.204361 | orchestrator | 2026-03-28 01:19:26 | INFO  | Setting property replace_frequency: never 2026-03-28 01:19:29.204372 | orchestrator | 2026-03-28 01:19:26 | INFO  | Setting property uuid_validity: none 2026-03-28 01:19:29.204383 | orchestrator | 2026-03-28 01:19:26 | INFO  | Setting property provided_until: none 2026-03-28 01:19:29.204394 | orchestrator | 2026-03-28 01:19:27 | INFO  | Setting property image_description: Cirros 2026-03-28 01:19:29.204414 | orchestrator | 2026-03-28 01:19:27 | INFO  | Setting property image_name: Cirros 2026-03-28 01:19:29.204425 | orchestrator | 2026-03-28 01:19:27 | INFO  | Setting property internal_version: 0.6.3 2026-03-28 01:19:29.204436 | orchestrator | 2026-03-28 01:19:27 | INFO  | Setting property image_original_user: cirros 2026-03-28 01:19:29.204447 | orchestrator | 2026-03-28 01:19:27 | INFO  | Setting property os_version: 0.6.3 2026-03-28 01:19:29.204458 | orchestrator | 2026-03-28 01:19:28 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-28 01:19:29.204469 | orchestrator | 2026-03-28 01:19:28 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-28 01:19:29.204480 | orchestrator | 2026-03-28 01:19:28 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-28 01:19:29.204491 | orchestrator | 2026-03-28 01:19:28 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-28 01:19:29.204502 | orchestrator | 2026-03-28 01:19:28 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-28 01:19:29.515694 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-28 01:19:31.616157 | orchestrator | 2026-03-28 01:19:31 | INFO  | date: 2026-03-27 2026-03-28 01:19:31.616304 | orchestrator | 2026-03-28 01:19:31 | INFO  | image: 
octavia-amphora-haproxy-2025.1.20260327.qcow2 2026-03-28 01:19:31.616482 | orchestrator | 2026-03-28 01:19:31 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260327.qcow2 2026-03-28 01:19:31.617094 | orchestrator | 2026-03-28 01:19:31 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260327.qcow2.CHECKSUM 2026-03-28 01:19:31.828195 | orchestrator | 2026-03-28 01:19:31 | INFO  | checksum: f5fccc4305cf0c46e7c480067abe5cd1b4f13cdb0075b3ddfa707e2034d37b4a 2026-03-28 01:19:31.921662 | orchestrator | 2026-03-28 01:19:31 | INFO  | It takes a moment until task bcf21c31-1a47-4c40-a5c1-1adc2e2e58f9 (image-manager) has been started and output is visible here. 2026-03-28 01:20:43.075452 | orchestrator | 2026-03-28 01:19:34 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-27' 2026-03-28 01:20:43.075569 | orchestrator | 2026-03-28 01:19:34 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260327.qcow2: 200 2026-03-28 01:20:43.075588 | orchestrator | 2026-03-28 01:19:34 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-27 2026-03-28 01:20:43.075601 | orchestrator | 2026-03-28 01:19:34 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260327.qcow2 2026-03-28 01:20:43.075613 | orchestrator | 2026-03-28 01:19:35 | INFO  | Waiting for image to leave queued state... 2026-03-28 01:20:43.075625 | orchestrator | 2026-03-28 01:19:37 | INFO  | Waiting for import to complete... 2026-03-28 01:20:43.075636 | orchestrator | 2026-03-28 01:19:48 | INFO  | Waiting for import to complete... 2026-03-28 01:20:43.075647 | orchestrator | 2026-03-28 01:19:58 | INFO  | Waiting for import to complete... 
2026-03-28 01:20:43.075659 | orchestrator | 2026-03-28 01:20:08 | INFO  | Waiting for import to complete... 2026-03-28 01:20:43.075673 | orchestrator | 2026-03-28 01:20:18 | INFO  | Waiting for import to complete... 2026-03-28 01:20:43.075684 | orchestrator | 2026-03-28 01:20:28 | INFO  | Waiting for import to complete... 2026-03-28 01:20:43.075695 | orchestrator | 2026-03-28 01:20:38 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-27' successfully completed, reloading images 2026-03-28 01:20:43.075735 | orchestrator | 2026-03-28 01:20:38 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-27' 2026-03-28 01:20:43.075747 | orchestrator | 2026-03-28 01:20:38 | INFO  | Setting internal_version = 2026-03-27 2026-03-28 01:20:43.075758 | orchestrator | 2026-03-28 01:20:38 | INFO  | Setting image_original_user = ubuntu 2026-03-28 01:20:43.075769 | orchestrator | 2026-03-28 01:20:38 | INFO  | Adding tag amphora 2026-03-28 01:20:43.075780 | orchestrator | 2026-03-28 01:20:38 | INFO  | Adding tag os:ubuntu 2026-03-28 01:20:43.075791 | orchestrator | 2026-03-28 01:20:39 | INFO  | Setting property architecture: x86_64 2026-03-28 01:20:43.075802 | orchestrator | 2026-03-28 01:20:39 | INFO  | Setting property hw_disk_bus: scsi 2026-03-28 01:20:43.075843 | orchestrator | 2026-03-28 01:20:39 | INFO  | Setting property hw_rng_model: virtio 2026-03-28 01:20:43.075863 | orchestrator | 2026-03-28 01:20:39 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-28 01:20:43.075880 | orchestrator | 2026-03-28 01:20:39 | INFO  | Setting property hw_watchdog_action: reset 2026-03-28 01:20:43.075892 | orchestrator | 2026-03-28 01:20:40 | INFO  | Setting property hypervisor_type: qemu 2026-03-28 01:20:43.075902 | orchestrator | 2026-03-28 01:20:40 | INFO  | Setting property os_distro: ubuntu 2026-03-28 01:20:43.075915 | orchestrator | 2026-03-28 01:20:40 | INFO  | Setting property replace_frequency: quarterly 2026-03-28 01:20:43.075928 | orchestrator | 2026-03-28 
01:20:40 | INFO  | Setting property uuid_validity: last-1 2026-03-28 01:20:43.075941 | orchestrator | 2026-03-28 01:20:40 | INFO  | Setting property provided_until: none 2026-03-28 01:20:43.075954 | orchestrator | 2026-03-28 01:20:41 | INFO  | Setting property os_purpose: network 2026-03-28 01:20:43.075967 | orchestrator | 2026-03-28 01:20:41 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-03-28 01:20:43.075996 | orchestrator | 2026-03-28 01:20:41 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-03-28 01:20:43.076009 | orchestrator | 2026-03-28 01:20:41 | INFO  | Setting property internal_version: 2026-03-27 2026-03-28 01:20:43.076022 | orchestrator | 2026-03-28 01:20:41 | INFO  | Setting property image_original_user: ubuntu 2026-03-28 01:20:43.076035 | orchestrator | 2026-03-28 01:20:42 | INFO  | Setting property os_version: 2026-03-27 2026-03-28 01:20:43.076048 | orchestrator | 2026-03-28 01:20:42 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260327.qcow2 2026-03-28 01:20:43.076062 | orchestrator | 2026-03-28 01:20:42 | INFO  | Setting property image_build_date: 2026-03-27 2026-03-28 01:20:43.076075 | orchestrator | 2026-03-28 01:20:42 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-27' 2026-03-28 01:20:43.076088 | orchestrator | 2026-03-28 01:20:42 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-27' 2026-03-28 01:20:43.076146 | orchestrator | 2026-03-28 01:20:42 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-03-28 01:20:43.076181 | orchestrator | 2026-03-28 01:20:42 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-03-28 01:20:43.076201 | orchestrator | 2026-03-28 01:20:42 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-03-28 01:20:43.076220 | orchestrator | 2026-03-28 01:20:42 | WARNING  | No image 
definition found for 'Cirros 0.6.2', image will be ignored 2026-03-28 01:20:43.597970 | orchestrator | ok: Runtime: 0:03:20.179724 2026-03-28 01:20:43.627905 | 2026-03-28 01:20:43.628084 | TASK [Run checks] 2026-03-28 01:20:44.396216 | orchestrator | + set -e 2026-03-28 01:20:44.396412 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-28 01:20:44.396438 | orchestrator | ++ export INTERACTIVE=false 2026-03-28 01:20:44.396458 | orchestrator | ++ INTERACTIVE=false 2026-03-28 01:20:44.396469 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-28 01:20:44.396478 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-28 01:20:44.396488 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-28 01:20:44.397504 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-28 01:20:44.406069 | orchestrator | 2026-03-28 01:20:44.406167 | orchestrator | # CHECK 2026-03-28 01:20:44.406179 | orchestrator | 2026-03-28 01:20:44.406190 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-28 01:20:44.406204 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-28 01:20:44.406213 | orchestrator | + echo 2026-03-28 01:20:44.406222 | orchestrator | + echo '# CHECK' 2026-03-28 01:20:44.406231 | orchestrator | + echo 2026-03-28 01:20:44.406246 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-28 01:20:44.406663 | orchestrator | ++ semver latest 5.0.0 2026-03-28 01:20:44.464353 | orchestrator | 2026-03-28 01:20:44.464459 | orchestrator | ## Containers @ testbed-manager 2026-03-28 01:20:44.464473 | orchestrator | 2026-03-28 01:20:44.464488 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-28 01:20:44.464499 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 01:20:44.464510 | orchestrator | + echo 2026-03-28 01:20:44.464522 | orchestrator | + echo '## Containers @ testbed-manager' 2026-03-28 01:20:44.464534 | orchestrator | + 
echo 2026-03-28 01:20:44.464545 | orchestrator | + osism container testbed-manager ps 2026-03-28 01:20:45.631781 | orchestrator | 2026-03-28 01:20:45 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-03-28 01:20:46.042102 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-28 01:20:46.042193 | orchestrator | df19d20407d9 registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_blackbox_exporter 2026-03-28 01:20:46.042208 | orchestrator | 33d4295a7d42 registry.osism.tech/kolla/prometheus-alertmanager:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_alertmanager 2026-03-28 01:20:46.042216 | orchestrator | 297887c8013e registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_cadvisor 2026-03-28 01:20:46.042227 | orchestrator | aeb269dcba5c registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes prometheus_node_exporter 2026-03-28 01:20:46.042239 | orchestrator | 7c059f4d434b registry.osism.tech/kolla/prometheus-server:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes prometheus_server 2026-03-28 01:20:46.042246 | orchestrator | fd59f43dbffb registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 20 minutes ago Up 20 minutes cephclient 2026-03-28 01:20:46.042253 | orchestrator | 784e89e6fb27 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-03-28 01:20:46.042260 | orchestrator | 45b7dabbe218 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2026-03-28 01:20:46.042286 | orchestrator | de450dba4edf registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-03-28 01:20:46.042293 | orchestrator | a3ad6f602950 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 
34 minutes ago Up 33 minutes (healthy) 80/tcp phpmyadmin 2026-03-28 01:20:46.042300 | orchestrator | 4bebaae33585 registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 35 minutes ago Up 34 minutes openstackclient 2026-03-28 01:20:46.042306 | orchestrator | ab0675232bee registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 35 minutes ago Up 34 minutes (healthy) 8080/tcp homer 2026-03-28 01:20:46.042313 | orchestrator | 5b79250afcb5 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-03-28 01:20:46.042320 | orchestrator | cfdd8d2bd690 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 41 minutes (healthy) manager-inventory_reconciler-1 2026-03-28 01:20:46.042339 | orchestrator | fff18ef171d0 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) ceph-ansible 2026-03-28 01:20:46.042371 | orchestrator | e1401a90ca21 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) osism-ansible 2026-03-28 01:20:46.042379 | orchestrator | ad4b6015046d registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) osism-kubernetes 2026-03-28 01:20:46.042385 | orchestrator | 14a2819c64d8 registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" About an hour ago Up 41 minutes (healthy) kolla-ansible 2026-03-28 01:20:46.042856 | orchestrator | b813b70080b1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 42 minutes (healthy) 8000/tcp manager-ara-server-1 2026-03-28 01:20:46.042878 | orchestrator | ee30e0140d79 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 42 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-03-28 01:20:46.042889 | orchestrator | 
0e141ca38a4c registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 42 minutes (healthy) 3306/tcp manager-mariadb-1 2026-03-28 01:20:46.042898 | orchestrator | 870beee5f216 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 42 minutes (healthy) osismclient 2026-03-28 01:20:46.042905 | orchestrator | eb58c074fd21 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-beat-1 2026-03-28 01:20:46.042922 | orchestrator | d3d8cdb89d99 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 42 minutes (healthy) 6379/tcp manager-redis-1 2026-03-28 01:20:46.042928 | orchestrator | 340a1dd5c8a5 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-listener-1 2026-03-28 01:20:46.042935 | orchestrator | 408f8fb8f862 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-openstack-1 2026-03-28 01:20:46.042942 | orchestrator | 983313cfdf3b registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) manager-flower-1 2026-03-28 01:20:46.042948 | orchestrator | 8ee9d19e8bee registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 42 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-03-28 01:20:46.042954 | orchestrator | 655b8cac7dca registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-03-28 01:20:46.207809 | orchestrator | 2026-03-28 01:20:46.207940 | orchestrator | ## Images @ testbed-manager 2026-03-28 01:20:46.207958 | orchestrator | 2026-03-28 01:20:46.207972 | orchestrator | + echo 2026-03-28 01:20:46.208021 | orchestrator | + echo '## Images @ 
testbed-manager' 2026-03-28 01:20:46.208035 | orchestrator | + echo 2026-03-28 01:20:46.208051 | orchestrator | + osism container testbed-manager images 2026-03-28 01:20:47.808019 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:20:47.808179 | orchestrator | registry.osism.tech/osism/osism-ansible latest 797fb5579132 About an hour ago 638MB 2026-03-28 01:20:47.808210 | orchestrator | registry.osism.tech/osism/kolla-ansible 2025.1 6adfdad4da5a About an hour ago 635MB 2026-03-28 01:20:47.808230 | orchestrator | registry.osism.tech/osism/ceph-ansible reef da8d7357ca6a About an hour ago 585MB 2026-03-28 01:20:47.808251 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 675eca99ead0 About an hour ago 1.24GB 2026-03-28 01:20:47.808270 | orchestrator | registry.osism.tech/osism/osism latest de9a25a40c10 About an hour ago 406MB 2026-03-28 01:20:47.808289 | orchestrator | registry.osism.tech/osism/osism-frontend latest 1cf19984e75b About an hour ago 212MB 2026-03-28 01:20:47.808309 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest a982a5a9270e About an hour ago 357MB 2026-03-28 01:20:47.808321 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 ec0ad576226f 2 hours ago 590MB 2026-03-28 01:20:47.808332 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 257d2db70ac0 2 hours ago 683MB 2026-03-28 01:20:47.808344 | orchestrator | registry.osism.tech/kolla/cron 2025.1 d7b2ad1eef56 2 hours ago 277MB 2026-03-28 01:20:47.808355 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2025.1 40191a3580af 2 hours ago 415MB 2026-03-28 01:20:47.808366 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2025.1 502b29c05545 2 hours ago 319MB 2026-03-28 01:20:47.808377 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 a1ccb6a09cb0 2 hours ago 368MB 2026-03-28 01:20:47.808434 | orchestrator | registry.osism.tech/kolla/prometheus-server 2025.1 87e8add6b675 2 hours ago 
860MB 2026-03-28 01:20:47.808458 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 8773d2fa8fd1 2 hours ago 317MB 2026-03-28 01:20:47.808477 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 a68a9cb32096 21 hours ago 213MB 2026-03-28 01:20:47.808493 | orchestrator | registry.osism.tech/osism/cephclient reef df5bb5c5d20c 21 hours ago 453MB 2026-03-28 01:20:47.808512 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 8 weeks ago 41.4MB 2026-03-28 01:20:47.808530 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB 2026-03-28 01:20:47.808549 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-03-28 01:20:47.808561 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-03-28 01:20:47.808572 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-03-28 01:20:47.808585 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-03-28 01:20:47.808603 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB 2026-03-28 01:20:47.989879 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-28 01:20:47.990222 | orchestrator | ++ semver latest 5.0.0 2026-03-28 01:20:48.051567 | orchestrator | 2026-03-28 01:20:48.051670 | orchestrator | ## Containers @ testbed-node-0 2026-03-28 01:20:48.051683 | orchestrator | 2026-03-28 01:20:48.051692 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-28 01:20:48.051701 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 01:20:48.051711 | orchestrator | + echo 2026-03-28 01:20:48.051721 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-03-28 01:20:48.051731 | orchestrator | + echo 2026-03-28 01:20:48.051740 | orchestrator | + osism container testbed-node-0 ps 
2026-03-28 01:20:49.754900 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-28 01:20:49.755068 | orchestrator | 777eec8f80ea registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-28 01:20:49.755090 | orchestrator | ad5649bb9df1 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-28 01:20:49.755103 | orchestrator | 783486cb7867 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-28 01:20:49.755135 | orchestrator | 1878ff7ec95a registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 5 minutes ago Up 4 minutes octavia_driver_agent 2026-03-28 01:20:49.755147 | orchestrator | 18fd6718b5df registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-03-28 01:20:49.755158 | orchestrator | c3b64cc1ba18 registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2026-03-28 01:20:49.755169 | orchestrator | 118957f225f4 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-03-28 01:20:49.755181 | orchestrator | 5bf4f29fa473 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-28 01:20:49.755211 | orchestrator | 6d522fafb167 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2026-03-28 01:20:49.755223 | orchestrator | d60de8e17d66 registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2026-03-28 01:20:49.755234 | orchestrator | 46d14a8c6a82 
registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2026-03-28 01:20:49.755245 | orchestrator | f6d2503e76a3 registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2026-03-28 01:20:49.755257 | orchestrator | 09fdc36b00c6 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2026-03-28 01:20:49.755268 | orchestrator | 5de390d9f234 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-03-28 01:20:49.755279 | orchestrator | 9afe6c348924 registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2026-03-28 01:20:49.755290 | orchestrator | 7881488351c6 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2026-03-28 01:20:49.755301 | orchestrator | 1a142213b409 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2026-03-28 01:20:49.755312 | orchestrator | 1267ec4faa67 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_metadata 2026-03-28 01:20:49.755323 | orchestrator | 2696c4b54645 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2026-03-28 01:20:49.755334 | orchestrator | f0b9853f75ec registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2026-03-28 01:20:49.755345 | orchestrator | 7a9d8cb0ee33 registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 12 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-03-28 01:20:49.755378 | orchestrator | c01d26bb6f2b 
registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server 2026-03-28 01:20:49.755397 | orchestrator | d2af2b198ecb registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2026-03-28 01:20:49.755408 | orchestrator | 7ccfaab364be registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2026-03-28 01:20:49.755419 | orchestrator | aa01e5c5cfa3 registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup 2026-03-28 01:20:49.755436 | orchestrator | b7e4434e603d registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-03-28 01:20:49.755448 | orchestrator | 707eea0b036d registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2026-03-28 01:20:49.755459 | orchestrator | 39060a487030 registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2026-03-28 01:20:49.755478 | orchestrator | a4dc55b6ba0b registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2026-03-28 01:20:49.755489 | orchestrator | 946338695a8f registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_elasticsearch_exporter 2026-03-28 01:20:49.755501 | orchestrator | c12a6ffa2183 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_cadvisor 2026-03-28 01:20:49.755512 | orchestrator | 6067f149c936 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter 
2026-03-28 01:20:49.755523 | orchestrator | 320c2ca7495d registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 18 minutes ago Up 17 minutes prometheus_mysqld_exporter 2026-03-28 01:20:49.755534 | orchestrator | edd733b3cc61 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes prometheus_node_exporter 2026-03-28 01:20:49.755545 | orchestrator | d2d584e68d72 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-03-28 01:20:49.755556 | orchestrator | e78415160cff registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-03-28 01:20:49.755567 | orchestrator | 80c8c9c24d7f registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-03-28 01:20:49.755578 | orchestrator | 26cb1d229baa registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-0 2026-03-28 01:20:49.755589 | orchestrator | 63a6d1362306 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-03-28 01:20:49.755600 | orchestrator | d374f0016db1 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-03-28 01:20:49.755611 | orchestrator | 42e567206054 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-03-28 01:20:49.755623 | orchestrator | 96a2211d1890 registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-03-28 01:20:49.755634 | orchestrator | b4a6a61f73ce registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-03-28 01:20:49.755645 
| orchestrator | 0ff8fcc6069e registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-28 01:20:49.755670 | orchestrator | 9c91121e6fbe registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-03-28 01:20:49.755681 | orchestrator | 7299c15733b4 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-0 2026-03-28 01:20:49.755698 | orchestrator | 007dc439286d registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-03-28 01:20:49.755716 | orchestrator | af1f7d4bdc55 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db_relay_1 2026-03-28 01:20:49.755727 | orchestrator | 7f6e2985b435 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2026-03-28 01:20:49.755738 | orchestrator | ecc3205b83be registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2026-03-28 01:20:49.755749 | orchestrator | 9deabbb35d8f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-0 2026-03-28 01:20:49.755760 | orchestrator | cfe31f3b1616 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-03-28 01:20:49.755772 | orchestrator | 1c048a027b43 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) rabbitmq 2026-03-28 01:20:49.755782 | orchestrator | 4a16d4f37b14 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-03-28 01:20:49.755794 | orchestrator | d0f6d3b3f12c registry.osism.tech/kolla/openvswitch-db-server:2025.1 
"dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2026-03-28 01:20:49.755804 | orchestrator | ab9380fd78e9 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-03-28 01:20:49.755875 | orchestrator | 943cfef7956b registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-03-28 01:20:49.755887 | orchestrator | d216eba90254 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-28 01:20:49.755898 | orchestrator | 7a486304927d registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-03-28 01:20:49.755909 | orchestrator | b73c5bf73e4a registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-03-28 01:20:49.755920 | orchestrator | ae5938d02a5f registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 34 minutes ago Up 34 minutes fluentd 2026-03-28 01:20:49.936656 | orchestrator | 2026-03-28 01:20:49.936769 | orchestrator | ## Images @ testbed-node-0 2026-03-28 01:20:49.936785 | orchestrator | 2026-03-28 01:20:49.936797 | orchestrator | + echo 2026-03-28 01:20:49.936858 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-28 01:20:49.936913 | orchestrator | + echo 2026-03-28 01:20:49.936926 | orchestrator | + osism container testbed-node-0 images 2026-03-28 01:20:51.625494 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:20:51.625583 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 f23a473dee09 2 hours ago 1.57GB 2026-03-28 01:20:51.625592 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 34b1b2855d6c 2 hours ago 1.54GB 2026-03-28 01:20:51.625598 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 a2aedec6ec4d 2 hours ago 277MB 2026-03-28 01:20:51.625603 | orchestrator | 
registry.osism.tech/kolla/haproxy 2025.1 9fcb0ecf3f54 2 hours ago 285MB 2026-03-28 01:20:51.625609 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 ec0ad576226f 2 hours ago 590MB 2026-03-28 01:20:51.625614 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 318460cdffbc 2 hours ago 350MB 2026-03-28 01:20:51.625640 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 c65a025a8533 2 hours ago 1.04GB 2026-03-28 01:20:51.625646 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 7cea4a3318a5 2 hours ago 288MB 2026-03-28 01:20:51.625651 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 257d2db70ac0 2 hours ago 683MB 2026-03-28 01:20:51.625656 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 2eea1d3fab86 2 hours ago 427MB 2026-03-28 01:20:51.625662 | orchestrator | registry.osism.tech/kolla/cron 2025.1 d7b2ad1eef56 2 hours ago 277MB 2026-03-28 01:20:51.625667 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 e8ace4d61cba 2 hours ago 463MB 2026-03-28 01:20:51.625672 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 db2de2398535 2 hours ago 303MB 2026-03-28 01:20:51.625677 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 9a366225a743 2 hours ago 309MB 2026-03-28 01:20:51.625682 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 d318aed80f13 2 hours ago 312MB 2026-03-28 01:20:51.625687 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 a1ccb6a09cb0 2 hours ago 368MB 2026-03-28 01:20:51.625692 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 8773d2fa8fd1 2 hours ago 317MB 2026-03-28 01:20:51.625697 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 8018db18e8ff 2 hours ago 1.2GB 2026-03-28 01:20:51.625702 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 03a5f4ad1227 2 hours ago 293MB 2026-03-28 01:20:51.625707 | orchestrator | 
registry.osism.tech/kolla/redis-sentinel 2025.1 6b3cacdc117d 2 hours ago 284MB 2026-03-28 01:20:51.625712 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 19794e56d254 2 hours ago 293MB 2026-03-28 01:20:51.625717 | orchestrator | registry.osism.tech/kolla/redis 2025.1 0fbfe81f63d7 2 hours ago 284MB 2026-03-28 01:20:51.625722 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 730312f6cfac 2 hours ago 1.09GB 2026-03-28 01:20:51.625727 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 855d416f8ca2 2 hours ago 1.06GB 2026-03-28 01:20:51.625740 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 6d866228e01b 2 hours ago 1.05GB 2026-03-28 01:20:51.625747 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 adc478af8a76 2 hours ago 1.43GB 2026-03-28 01:20:51.625752 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 9cc74e3d2ba3 2 hours ago 1.44GB 2026-03-28 01:20:51.625758 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 f0c4321acd9c 2 hours ago 1.79GB 2026-03-28 01:20:51.625763 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 16d5ac98152c 2 hours ago 1.43GB 2026-03-28 01:20:51.625768 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 770e900dc0dd 2 hours ago 1.23GB 2026-03-28 01:20:51.625773 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 1d4bbfd9e3e8 2 hours ago 1.23GB 2026-03-28 01:20:51.625778 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 725e02368046 2 hours ago 1.39GB 2026-03-28 01:20:51.625783 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 b07a8dfc89e7 2 hours ago 1.23GB 2026-03-28 01:20:51.625788 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2025.1 4b1ebf917aff 2 hours ago 996MB 2026-03-28 01:20:51.625793 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2025.1 b2297933aaf4 2 hours ago 997MB 2026-03-28 01:20:51.625798 | orchestrator | 
registry.osism.tech/kolla/aodh-api 2025.1 b7b3582c5514 2 hours ago 994MB 2026-03-28 01:20:51.625837 | orchestrator | registry.osism.tech/kolla/aodh-listener 2025.1 8dcbd23193b5 2 hours ago 995MB 2026-03-28 01:20:51.625843 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2025.1 a70e20bdcd16 2 hours ago 995MB 2026-03-28 01:20:51.625848 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2025.1 6b1e0f90273a 2 hours ago 995MB 2026-03-28 01:20:51.625853 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 4d4d4e3bae76 2 hours ago 1.24GB 2026-03-28 01:20:51.625858 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 b18f608d97d7 2 hours ago 996MB 2026-03-28 01:20:51.625864 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2025.1 decb4d5063d6 2 hours ago 1.02GB 2026-03-28 01:20:51.625868 | orchestrator | registry.osism.tech/kolla/skyline-console 2025.1 478effaaf6ca 2 hours ago 1.07GB 2026-03-28 01:20:51.625873 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 1318608158fb 2 hours ago 1.12GB 2026-03-28 01:20:51.625878 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 a8f70f35c0b3 2 hours ago 1GB 2026-03-28 01:20:51.625884 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 c18a44ccc8ef 2 hours ago 1GB 2026-03-28 01:20:51.625889 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 28a641be6b30 2 hours ago 1GB 2026-03-28 01:20:51.625893 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 d2273672a0e3 2 hours ago 1.01GB 2026-03-28 01:20:51.625898 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 950cc279fc53 2 hours ago 1.01GB 2026-03-28 01:20:51.625903 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 6dcc1980ff9d 2 hours ago 1GB 2026-03-28 01:20:51.625908 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 49f86d479094 2 hours ago 1GB 2026-03-28 01:20:51.625914 | orchestrator | 
registry.osism.tech/kolla/barbican-worker 2025.1 1c24b2b2f295 2 hours ago 1GB 2026-03-28 01:20:51.625919 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 df9652a1fc0a 2 hours ago 1GB 2026-03-28 01:20:51.625924 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 0ecf325b1cec 2 hours ago 1.05GB 2026-03-28 01:20:51.625929 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 051c1f187320 2 hours ago 1.07GB 2026-03-28 01:20:51.625941 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 b22536b55e1b 2 hours ago 1.07GB 2026-03-28 01:20:51.625946 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 6f11cf84ebfc 2 hours ago 1.05GB 2026-03-28 01:20:51.625951 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 78c3d9e61533 2 hours ago 1.05GB 2026-03-28 01:20:51.625956 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 2fd2cf1ca325 2 hours ago 1.27GB 2026-03-28 01:20:51.625961 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 6cd0444f3c8f 2 hours ago 1.15GB 2026-03-28 01:20:51.625966 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 3647e85523a1 2 hours ago 301MB 2026-03-28 01:20:51.625971 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 678e068627bb 2 hours ago 301MB 2026-03-28 01:20:51.625976 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 2fc4bcc97265 2 hours ago 301MB 2026-03-28 01:20:51.625981 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 53896efa059c 2 hours ago 301MB 2026-03-28 01:20:51.625986 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 e081c4714e8e 2 hours ago 301MB 2026-03-28 01:20:51.625996 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 38e4762011f6 21 hours ago 1.35GB 2026-03-28 01:20:51.795614 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-28 01:20:51.795874 | orchestrator | ++ semver latest 
5.0.0 2026-03-28 01:20:51.858106 | orchestrator | 2026-03-28 01:20:51.858224 | orchestrator | ## Containers @ testbed-node-1 2026-03-28 01:20:51.858237 | orchestrator | 2026-03-28 01:20:51.858245 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-28 01:20:51.858253 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 01:20:51.858260 | orchestrator | + echo 2026-03-28 01:20:51.858268 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-28 01:20:51.858277 | orchestrator | + echo 2026-03-28 01:20:51.858284 | orchestrator | + osism container testbed-node-1 ps 2026-03-28 01:20:53.470103 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-28 01:20:53.470257 | orchestrator | 3a37aedc22b4 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-28 01:20:53.470299 | orchestrator | 6ca44ee93e4f registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-28 01:20:53.470318 | orchestrator | e1ab2f4a6bd4 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-28 01:20:53.471117 | orchestrator | 87cc82b18432 registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 5 minutes ago Up 4 minutes octavia_driver_agent 2026-03-28 01:20:53.471156 | orchestrator | 825a7cc53285 registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-03-28 01:20:53.471174 | orchestrator | 944c0124617c registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-03-28 01:20:53.471192 | orchestrator | 0e140a04a9a4 registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2026-03-28 01:20:53.471209 | 
orchestrator | 32edc8947b04 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-28 01:20:53.471227 | orchestrator | b805989c3e41 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2026-03-28 01:20:53.471246 | orchestrator | 481cef858343 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2026-03-28 01:20:53.471264 | orchestrator | baa960b9b1aa registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2026-03-28 01:20:53.471305 | orchestrator | fba904a31f6b registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2026-03-28 01:20:53.471318 | orchestrator | f2760937b394 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2026-03-28 01:20:53.471329 | orchestrator | 62349765abb0 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-03-28 01:20:53.471340 | orchestrator | 02a896ae146c registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2026-03-28 01:20:53.471351 | orchestrator | 81adb0c02859 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2026-03-28 01:20:53.471384 | orchestrator | 06396e41c4aa registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2026-03-28 01:20:53.471396 | orchestrator | 4c9d633f10af registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_metadata 2026-03-28 01:20:53.471407 | 
orchestrator | f5b47d736fa4 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2026-03-28 01:20:53.471418 | orchestrator | 01d2b829180f registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 12 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-03-28 01:20:53.471429 | orchestrator | 13a453463c1a registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) neutron_server 2026-03-28 01:20:53.471440 | orchestrator | e6b5ec5851a6 registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2026-03-28 01:20:53.471451 | orchestrator | 028bb22ab7db registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2026-03-28 01:20:53.471462 | orchestrator | 242d555940ef registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2026-03-28 01:20:53.471473 | orchestrator | 18c6a4c3e3e4 registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup 2026-03-28 01:20:53.471498 | orchestrator | a30e7fafd85c registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-03-28 01:20:53.471509 | orchestrator | 0f13433a9a7e registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2026-03-28 01:20:53.471520 | orchestrator | 074684f2e209 registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2026-03-28 01:20:53.471531 | orchestrator | 9e6a18096454 registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2026-03-28 01:20:53.471542 | 
orchestrator | 60d1fac29ffa registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_elasticsearch_exporter 2026-03-28 01:20:53.471554 | orchestrator | f2f01319d47f registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_cadvisor 2026-03-28 01:20:53.471565 | orchestrator | 128b5dc41ee8 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter 2026-03-28 01:20:53.471576 | orchestrator | 21093ed7c037 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes prometheus_mysqld_exporter 2026-03-28 01:20:53.471587 | orchestrator | 88311b3d7625 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes prometheus_node_exporter 2026-03-28 01:20:53.471604 | orchestrator | a2b7488990cc registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-03-28 01:20:53.471623 | orchestrator | b203f03fa325 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-03-28 01:20:53.471634 | orchestrator | 6ebd68706fb6 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-03-28 01:20:53.471645 | orchestrator | e004005a53c1 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-03-28 01:20:53.471656 | orchestrator | 1e09a3d238c2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-1 2026-03-28 01:20:53.471667 | orchestrator | 053f2873aa82 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes 
(healthy) opensearch_dashboards 2026-03-28 01:20:53.471678 | orchestrator | e2f6b522d880 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-03-28 01:20:53.471689 | orchestrator | cf75252a0f98 registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-03-28 01:20:53.471699 | orchestrator | 04a319f85700 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-03-28 01:20:53.471710 | orchestrator | 8ffc87a16b27 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-28 01:20:53.471721 | orchestrator | ea6542540bdb registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-03-28 01:20:53.471732 | orchestrator | 74a2150ed281 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-1 2026-03-28 01:20:53.471743 | orchestrator | e4ed484e589e registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-03-28 01:20:53.471754 | orchestrator | d429fa856cbc registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db_relay_1 2026-03-28 01:20:53.471773 | orchestrator | a49d044cccb2 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 26 minutes ovn_sb_db 2026-03-28 01:20:53.471784 | orchestrator | d0698e3308fe registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 26 minutes ovn_nb_db 2026-03-28 01:20:53.471795 | orchestrator | 55d071b61070 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-03-28 01:20:53.471806 | orchestrator | ec3dcc207e9d 
registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-03-28 01:20:53.471865 | orchestrator | 3f104503cce9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-1 2026-03-28 01:20:53.471877 | orchestrator | 9b468c5bbce7 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_vswitchd 2026-03-28 01:20:53.471888 | orchestrator | 2ba7d83bc41a registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) openvswitch_db 2026-03-28 01:20:53.471907 | orchestrator | 780256dbb9fc registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-03-28 01:20:53.471918 | orchestrator | 95afd87fc796 registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-03-28 01:20:53.471929 | orchestrator | 9e261470be0c registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-28 01:20:53.471940 | orchestrator | addc81b31dce registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-03-28 01:20:53.471951 | orchestrator | 2fffe2c82792 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-03-28 01:20:53.471961 | orchestrator | dda88459a726 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-03-28 01:20:53.633548 | orchestrator | 2026-03-28 01:20:53.633649 | orchestrator | ## Images @ testbed-node-1 2026-03-28 01:20:53.633661 | orchestrator | 2026-03-28 01:20:53.633669 | orchestrator | + echo 2026-03-28 01:20:53.633677 | orchestrator | + echo '## Images @ testbed-node-1' 2026-03-28 01:20:53.633685 | 
orchestrator | + echo 2026-03-28 01:20:53.633693 | orchestrator | + osism container testbed-node-1 images 2026-03-28 01:20:55.255671 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:20:55.255786 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 f23a473dee09 2 hours ago 1.57GB 2026-03-28 01:20:55.255802 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 34b1b2855d6c 2 hours ago 1.54GB 2026-03-28 01:20:55.255916 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 a2aedec6ec4d 2 hours ago 277MB 2026-03-28 01:20:55.255932 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 9fcb0ecf3f54 2 hours ago 285MB 2026-03-28 01:20:55.255943 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 ec0ad576226f 2 hours ago 590MB 2026-03-28 01:20:55.255954 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 318460cdffbc 2 hours ago 350MB 2026-03-28 01:20:55.255965 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 c65a025a8533 2 hours ago 1.04GB 2026-03-28 01:20:55.255977 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 7cea4a3318a5 2 hours ago 288MB 2026-03-28 01:20:55.255989 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 257d2db70ac0 2 hours ago 683MB 2026-03-28 01:20:55.256007 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 2eea1d3fab86 2 hours ago 427MB 2026-03-28 01:20:55.256025 | orchestrator | registry.osism.tech/kolla/cron 2025.1 d7b2ad1eef56 2 hours ago 277MB 2026-03-28 01:20:55.256042 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 e8ace4d61cba 2 hours ago 463MB 2026-03-28 01:20:55.256062 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 db2de2398535 2 hours ago 303MB 2026-03-28 01:20:55.256081 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 9a366225a743 2 hours ago 309MB 2026-03-28 01:20:55.256100 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 
d318aed80f13 2 hours ago 312MB 2026-03-28 01:20:55.256118 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 a1ccb6a09cb0 2 hours ago 368MB 2026-03-28 01:20:55.256135 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 8773d2fa8fd1 2 hours ago 317MB 2026-03-28 01:20:55.256181 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 8018db18e8ff 2 hours ago 1.2GB 2026-03-28 01:20:55.256201 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 03a5f4ad1227 2 hours ago 293MB 2026-03-28 01:20:55.256218 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 19794e56d254 2 hours ago 293MB 2026-03-28 01:20:55.256231 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 6b3cacdc117d 2 hours ago 284MB 2026-03-28 01:20:55.256250 | orchestrator | registry.osism.tech/kolla/redis 2025.1 0fbfe81f63d7 2 hours ago 284MB 2026-03-28 01:20:55.256269 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 730312f6cfac 2 hours ago 1.09GB 2026-03-28 01:20:55.256288 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 855d416f8ca2 2 hours ago 1.06GB 2026-03-28 01:20:55.256307 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 6d866228e01b 2 hours ago 1.05GB 2026-03-28 01:20:55.256328 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 adc478af8a76 2 hours ago 1.43GB 2026-03-28 01:20:55.256346 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 9cc74e3d2ba3 2 hours ago 1.44GB 2026-03-28 01:20:55.256364 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 f0c4321acd9c 2 hours ago 1.79GB 2026-03-28 01:20:55.256377 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 16d5ac98152c 2 hours ago 1.43GB 2026-03-28 01:20:55.256428 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 770e900dc0dd 2 hours ago 1.23GB 2026-03-28 01:20:55.256448 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 1d4bbfd9e3e8 2 hours 
ago 1.23GB 2026-03-28 01:20:55.256466 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 725e02368046 2 hours ago 1.39GB 2026-03-28 01:20:55.256484 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 b07a8dfc89e7 2 hours ago 1.23GB 2026-03-28 01:20:55.256502 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 4d4d4e3bae76 2 hours ago 1.24GB 2026-03-28 01:20:55.256520 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 b18f608d97d7 2 hours ago 996MB 2026-03-28 01:20:55.256538 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 1318608158fb 2 hours ago 1.12GB 2026-03-28 01:20:55.256579 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 a8f70f35c0b3 2 hours ago 1GB 2026-03-28 01:20:55.256596 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 c18a44ccc8ef 2 hours ago 1GB 2026-03-28 01:20:55.256614 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 28a641be6b30 2 hours ago 1GB 2026-03-28 01:20:55.256630 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 d2273672a0e3 2 hours ago 1.01GB 2026-03-28 01:20:55.256647 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 950cc279fc53 2 hours ago 1.01GB 2026-03-28 01:20:55.256661 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 6dcc1980ff9d 2 hours ago 1GB 2026-03-28 01:20:55.256677 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 49f86d479094 2 hours ago 1GB 2026-03-28 01:20:55.256694 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 1c24b2b2f295 2 hours ago 1GB 2026-03-28 01:20:55.256709 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 df9652a1fc0a 2 hours ago 1GB 2026-03-28 01:20:55.256725 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 0ecf325b1cec 2 hours ago 1.05GB 2026-03-28 01:20:55.256758 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 051c1f187320 2 hours ago 
1.07GB 2026-03-28 01:20:55.256793 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 b22536b55e1b 2 hours ago 1.07GB 2026-03-28 01:20:55.256841 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 6f11cf84ebfc 2 hours ago 1.05GB 2026-03-28 01:20:55.256860 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 78c3d9e61533 2 hours ago 1.05GB 2026-03-28 01:20:55.256878 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 2fd2cf1ca325 2 hours ago 1.27GB 2026-03-28 01:20:55.256895 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 6cd0444f3c8f 2 hours ago 1.15GB 2026-03-28 01:20:55.256912 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 3647e85523a1 2 hours ago 301MB 2026-03-28 01:20:55.256930 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 678e068627bb 2 hours ago 301MB 2026-03-28 01:20:55.256948 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 2fc4bcc97265 2 hours ago 301MB 2026-03-28 01:20:55.256968 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 53896efa059c 2 hours ago 301MB 2026-03-28 01:20:55.256985 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 e081c4714e8e 2 hours ago 301MB 2026-03-28 01:20:55.257004 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 38e4762011f6 21 hours ago 1.35GB 2026-03-28 01:20:55.425529 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-28 01:20:55.425787 | orchestrator | ++ semver latest 5.0.0 2026-03-28 01:20:55.490723 | orchestrator | 2026-03-28 01:20:55.490845 | orchestrator | ## Containers @ testbed-node-2 2026-03-28 01:20:55.490861 | orchestrator | 2026-03-28 01:20:55.490869 | orchestrator | + [[ -1 -eq -1 ]] 2026-03-28 01:20:55.490887 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-28 01:20:55.490896 | orchestrator | + echo 2026-03-28 01:20:55.490905 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-28 
01:20:55.490914 | orchestrator | + echo 2026-03-28 01:20:55.490922 | orchestrator | + osism container testbed-node-2 ps 2026-03-28 01:20:57.060270 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-28 01:20:57.060384 | orchestrator | 82a2110e05d8 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-03-28 01:20:57.060395 | orchestrator | e526f0118a92 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-03-28 01:20:57.060400 | orchestrator | b24daf2262ff registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-03-28 01:20:57.060404 | orchestrator | e29e162d58ce registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-03-28 01:20:57.060409 | orchestrator | 940fa5a0ab17 registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-03-28 01:20:57.060413 | orchestrator | e97bd9a9fd03 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-03-28 01:20:57.060418 | orchestrator | a5727aa7db4a registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_conductor 2026-03-28 01:20:57.060422 | orchestrator | e82d1cfea0ab registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-28 01:20:57.060427 | orchestrator | d7fc7c8eaae3 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2026-03-28 01:20:57.060448 | orchestrator | b8cc9d9ffd5e registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 
minutes grafana 2026-03-28 01:20:57.060453 | orchestrator | 3b5014183709 registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2026-03-28 01:20:57.060457 | orchestrator | a15e69baa33e registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_worker 2026-03-28 01:20:57.060461 | orchestrator | 7673677747c0 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2026-03-28 01:20:57.060466 | orchestrator | f0fe0ac504c9 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-03-28 01:20:57.060470 | orchestrator | 49337e935b63 registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2026-03-28 01:20:57.060474 | orchestrator | 64ac4abdd689 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2026-03-28 01:20:57.060478 | orchestrator | 586e5406cd6c registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) designate_backend_bind9 2026-03-28 01:20:57.060482 | orchestrator | 54feba3282ae registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_metadata 2026-03-28 01:20:57.060486 | orchestrator | 861f30a69720 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2026-03-28 01:20:57.060491 | orchestrator | ba59bd5ae52b registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 12 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-03-28 01:20:57.060495 | orchestrator | 558391d01d12 registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 
minutes (healthy) neutron_server 2026-03-28 01:20:57.060509 | orchestrator | c6d32589e8a9 registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2026-03-28 01:20:57.060550 | orchestrator | 85966c49097d registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_keystone_listener 2026-03-28 01:20:57.060568 | orchestrator | ce0b257291c9 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) barbican_api 2026-03-28 01:20:57.060572 | orchestrator | 2104e3424136 registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup 2026-03-28 01:20:57.060576 | orchestrator | dd8ed7a8a571 registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-03-28 01:20:57.060580 | orchestrator | 7fa12a1faf1c registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) glance_api 2026-03-28 01:20:57.060585 | orchestrator | 377fc463cb2a registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2026-03-28 01:20:57.060593 | orchestrator | ed2e1673f34b registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2026-03-28 01:20:57.060598 | orchestrator | b92cb5dc2983 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_elasticsearch_exporter 2026-03-28 01:20:57.060603 | orchestrator | 6554cc52e133 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_cadvisor 2026-03-28 01:20:57.060607 | orchestrator | 6a0af18e8275 
registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 17 minutes ago Up 17 minutes prometheus_memcached_exporter 2026-03-28 01:20:57.060614 | orchestrator | 3e8f4e8014d6 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes prometheus_mysqld_exporter 2026-03-28 01:20:57.060619 | orchestrator | 6e885fd51f87 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes prometheus_node_exporter 2026-03-28 01:20:57.060623 | orchestrator | 8beb28927947 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-03-28 01:20:57.060627 | orchestrator | 9dfd9b21ac87 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-03-28 01:20:57.060631 | orchestrator | 793c19612e21 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-03-28 01:20:57.060635 | orchestrator | bea3dda8340b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 19 minutes ago Up 19 minutes ceph-mgr-testbed-node-2 2026-03-28 01:20:57.060728 | orchestrator | 6262cbc7ed16 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-03-28 01:20:57.060736 | orchestrator | 601201aa1e47 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2026-03-28 01:20:57.060740 | orchestrator | f8ac6b155cd4 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-03-28 01:20:57.060744 | orchestrator | abfc0fe7d4de registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-03-28 01:20:57.060748 | orchestrator | 
ad64d2df9cdc registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-03-28 01:20:57.060752 | orchestrator | 3a2a2004e019 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-28 01:20:57.060756 | orchestrator | 140c33e73502 registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-03-28 01:20:57.060761 | orchestrator | 6f72bb19f432 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 25 minutes ago Up 25 minutes ceph-crash-testbed-node-2 2026-03-28 01:20:57.060765 | orchestrator | 6b92e1b1276d registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2026-03-28 01:20:57.060769 | orchestrator | 85289777ee00 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db_relay_1 2026-03-28 01:20:57.060778 | orchestrator | d9e7b6a11a0b registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 26 minutes ovn_sb_db 2026-03-28 01:20:57.060782 | orchestrator | 9c5b1353772d registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 26 minutes ovn_nb_db 2026-03-28 01:20:57.060786 | orchestrator | e5cb34f50632 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-03-28 01:20:57.060790 | orchestrator | dd94d28ede3d registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes ovn_controller 2026-03-28 01:20:57.060794 | orchestrator | 698db66fc3b9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 30 minutes ago Up 30 minutes ceph-mon-testbed-node-2 2026-03-28 01:20:57.060799 | orchestrator | f84304386f4f registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes 
(healthy) openvswitch_vswitchd 2026-03-28 01:20:57.060803 | orchestrator | cca504c87379 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) openvswitch_db 2026-03-28 01:20:57.060842 | orchestrator | 913325b7482a registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis_sentinel 2026-03-28 01:20:57.060847 | orchestrator | 278d7523bf68 registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) redis 2026-03-28 01:20:57.060851 | orchestrator | aba74d22ac81 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) memcached 2026-03-28 01:20:57.060856 | orchestrator | 24e211c32844 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes cron 2026-03-28 01:20:57.060860 | orchestrator | d86e0b86cc61 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes kolla_toolbox 2026-03-28 01:20:57.060867 | orchestrator | 88c4c2467aac registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 33 minutes ago Up 33 minutes fluentd 2026-03-28 01:20:57.249125 | orchestrator | 2026-03-28 01:20:57.249205 | orchestrator | ## Images @ testbed-node-2 2026-03-28 01:20:57.249214 | orchestrator | 2026-03-28 01:20:57.249221 | orchestrator | + echo 2026-03-28 01:20:57.249228 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-28 01:20:57.249235 | orchestrator | + echo 2026-03-28 01:20:57.249242 | orchestrator | + osism container testbed-node-2 images 2026-03-28 01:20:58.799524 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-28 01:20:58.799633 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 f23a473dee09 2 hours ago 1.57GB 2026-03-28 01:20:58.799650 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 34b1b2855d6c 2 hours ago 1.54GB 2026-03-28 
01:20:58.799661 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 a2aedec6ec4d 2 hours ago 277MB
2026-03-28 01:20:58.799673 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 9fcb0ecf3f54 2 hours ago 285MB
2026-03-28 01:20:58.799684 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 ec0ad576226f 2 hours ago 590MB
2026-03-28 01:20:58.799695 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 318460cdffbc 2 hours ago 350MB
2026-03-28 01:20:58.799705 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 c65a025a8533 2 hours ago 1.04GB
2026-03-28 01:20:58.799759 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 7cea4a3318a5 2 hours ago 288MB
2026-03-28 01:20:58.799771 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 2eea1d3fab86 2 hours ago 427MB
2026-03-28 01:20:58.799794 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 257d2db70ac0 2 hours ago 683MB
2026-03-28 01:20:58.799856 | orchestrator | registry.osism.tech/kolla/cron 2025.1 d7b2ad1eef56 2 hours ago 277MB
2026-03-28 01:20:58.799871 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 e8ace4d61cba 2 hours ago 463MB
2026-03-28 01:20:58.799883 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 db2de2398535 2 hours ago 303MB
2026-03-28 01:20:58.799893 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 9a366225a743 2 hours ago 309MB
2026-03-28 01:20:58.799904 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 d318aed80f13 2 hours ago 312MB
2026-03-28 01:20:58.799915 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 a1ccb6a09cb0 2 hours ago 368MB
2026-03-28 01:20:58.799926 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 8773d2fa8fd1 2 hours ago 317MB
2026-03-28 01:20:58.799937 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 8018db18e8ff 2 hours ago 1.2GB
2026-03-28 01:20:58.799948 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 03a5f4ad1227 2 hours ago 293MB
2026-03-28 01:20:58.799959 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 6b3cacdc117d 2 hours ago 284MB
2026-03-28 01:20:58.799970 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 19794e56d254 2 hours ago 293MB
2026-03-28 01:20:58.799981 | orchestrator | registry.osism.tech/kolla/redis 2025.1 0fbfe81f63d7 2 hours ago 284MB
2026-03-28 01:20:58.799992 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 730312f6cfac 2 hours ago 1.09GB
2026-03-28 01:20:58.800003 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 855d416f8ca2 2 hours ago 1.06GB
2026-03-28 01:20:58.800014 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 6d866228e01b 2 hours ago 1.05GB
2026-03-28 01:20:58.800025 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 adc478af8a76 2 hours ago 1.43GB
2026-03-28 01:20:58.800053 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 9cc74e3d2ba3 2 hours ago 1.44GB
2026-03-28 01:20:58.800067 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 f0c4321acd9c 2 hours ago 1.79GB
2026-03-28 01:20:58.800080 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 16d5ac98152c 2 hours ago 1.43GB
2026-03-28 01:20:58.800093 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 770e900dc0dd 2 hours ago 1.23GB
2026-03-28 01:20:58.800105 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 1d4bbfd9e3e8 2 hours ago 1.23GB
2026-03-28 01:20:58.800117 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 725e02368046 2 hours ago 1.39GB
2026-03-28 01:20:58.800130 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 b07a8dfc89e7 2 hours ago 1.23GB
2026-03-28 01:20:58.800142 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 4d4d4e3bae76 2 hours ago 1.24GB
2026-03-28 01:20:58.800154 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 b18f608d97d7 2 hours ago 996MB
2026-03-28 01:20:58.800166 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 1318608158fb 2 hours ago 1.12GB
2026-03-28 01:20:58.800205 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 a8f70f35c0b3 2 hours ago 1GB
2026-03-28 01:20:58.800218 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 c18a44ccc8ef 2 hours ago 1GB
2026-03-28 01:20:58.800231 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 28a641be6b30 2 hours ago 1GB
2026-03-28 01:20:58.800243 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 d2273672a0e3 2 hours ago 1.01GB
2026-03-28 01:20:58.800255 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 950cc279fc53 2 hours ago 1.01GB
2026-03-28 01:20:58.800268 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 6dcc1980ff9d 2 hours ago 1GB
2026-03-28 01:20:58.800280 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 49f86d479094 2 hours ago 1GB
2026-03-28 01:20:58.800292 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 1c24b2b2f295 2 hours ago 1GB
2026-03-28 01:20:58.800304 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 df9652a1fc0a 2 hours ago 1GB
2026-03-28 01:20:58.800317 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 0ecf325b1cec 2 hours ago 1.05GB
2026-03-28 01:20:58.800329 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 051c1f187320 2 hours ago 1.07GB
2026-03-28 01:20:58.800341 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 b22536b55e1b 2 hours ago 1.07GB
2026-03-28 01:20:58.800353 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 6f11cf84ebfc 2 hours ago 1.05GB
2026-03-28 01:20:58.800365 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 78c3d9e61533 2 hours ago 1.05GB
2026-03-28 01:20:58.800377 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 2fd2cf1ca325 2 hours ago 1.27GB
2026-03-28 01:20:58.800389 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 6cd0444f3c8f 2 hours ago 1.15GB
2026-03-28 01:20:58.800406 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 3647e85523a1 2 hours ago 301MB
2026-03-28 01:20:58.800417 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 678e068627bb 2 hours ago 301MB
2026-03-28 01:20:58.800428 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 2fc4bcc97265 2 hours ago 301MB
2026-03-28 01:20:58.800440 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 53896efa059c 2 hours ago 301MB
2026-03-28 01:20:58.800450 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 e081c4714e8e 2 hours ago 301MB
2026-03-28 01:20:58.800461 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 38e4762011f6 21 hours ago 1.35GB
2026-03-28 01:20:58.964307 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-03-28 01:20:58.970590 | orchestrator | + set -e
2026-03-28 01:20:58.970672 | orchestrator | + source /opt/manager-vars.sh
2026-03-28 01:20:58.971614 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-28 01:20:58.971645 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-28 01:20:58.971654 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-28 01:20:58.971663 | orchestrator | ++ CEPH_VERSION=reef
2026-03-28 01:20:58.971672 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-28 01:20:58.971682 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-28 01:20:58.971691 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-28 01:20:58.971700 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-28 01:20:58.971709 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-03-28 01:20:58.971717 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-03-28 01:20:58.971726 | orchestrator | ++ export ARA=false
2026-03-28 01:20:58.971734 | orchestrator | ++ ARA=false
2026-03-28 01:20:58.971743 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-28 01:20:58.971752 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-28 01:20:58.971761 | orchestrator | ++ export TEMPEST=true
2026-03-28 01:20:58.971769 | orchestrator | ++ TEMPEST=true
2026-03-28 01:20:58.971778 | orchestrator | ++ export IS_ZUUL=true
2026-03-28 01:20:58.971853 | orchestrator | ++ IS_ZUUL=true
2026-03-28 01:20:58.971864 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2026-03-28 01:20:58.971872 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2026-03-28 01:20:58.971881 | orchestrator | ++ export EXTERNAL_API=false
2026-03-28 01:20:58.971890 | orchestrator | ++ EXTERNAL_API=false
2026-03-28 01:20:58.971899 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-28 01:20:58.971907 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-28 01:20:58.971916 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-28 01:20:58.971924 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-28 01:20:58.971933 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-28 01:20:58.971942 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-28 01:20:58.971950 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-28 01:20:58.971981 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-03-28 01:20:58.983176 | orchestrator | + set -e
2026-03-28 01:20:58.983205 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-28 01:20:58.983214 | orchestrator | ++ export INTERACTIVE=false
2026-03-28 01:20:58.983223 | orchestrator | ++ INTERACTIVE=false
2026-03-28 01:20:58.983232 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-28 01:20:58.983240 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-28 01:20:58.983249 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-28 01:20:58.983260 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }'
/opt/configuration/environments/manager/configuration.yml
2026-03-28 01:20:58.987463 | orchestrator |
2026-03-28 01:20:58.987561 | orchestrator | # Ceph status
2026-03-28 01:20:58.987575 | orchestrator |
2026-03-28 01:20:58.987587 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-28 01:20:58.987600 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-28 01:20:58.987611 | orchestrator | + echo
2026-03-28 01:20:58.987623 | orchestrator | + echo '# Ceph status'
2026-03-28 01:20:58.987634 | orchestrator | + echo
2026-03-28 01:20:58.987645 | orchestrator | + ceph -s
2026-03-28 01:20:59.637198 | orchestrator | cluster:
2026-03-28 01:20:59.637312 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-03-28 01:20:59.637335 | orchestrator | health: HEALTH_OK
2026-03-28 01:20:59.637352 | orchestrator |
2026-03-28 01:20:59.637369 | orchestrator | services:
2026-03-28 01:20:59.637385 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 30m)
2026-03-28 01:20:59.637404 | orchestrator | mgr: testbed-node-0(active, since 19m), standbys: testbed-node-1, testbed-node-2
2026-03-28 01:20:59.637421 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-03-28 01:20:59.637437 | orchestrator | osd: 6 osds: 6 up (since 26m), 6 in (since 27m)
2026-03-28 01:20:59.637453 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-03-28 01:20:59.637469 | orchestrator |
2026-03-28 01:20:59.637485 | orchestrator | data:
2026-03-28 01:20:59.637502 | orchestrator | volumes: 1/1 healthy
2026-03-28 01:20:59.637519 | orchestrator | pools: 14 pools, 401 pgs
2026-03-28 01:20:59.637535 | orchestrator | objects: 556 objects, 2.2 GiB
2026-03-28 01:20:59.637551 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail
2026-03-28 01:20:59.637567 | orchestrator | pgs: 401 active+clean
2026-03-28 01:20:59.637583 | orchestrator |
2026-03-28 01:20:59.694645 | orchestrator |
2026-03-28 01:20:59.694752 | orchestrator | # Ceph versions
2026-03-28 01:20:59.694772 | orchestrator |
2026-03-28 01:20:59.694787 | orchestrator | + echo
2026-03-28 01:20:59.694802 | orchestrator | + echo '# Ceph versions'
2026-03-28 01:20:59.694852 | orchestrator | + echo
2026-03-28 01:20:59.694865 | orchestrator | + ceph versions
2026-03-28 01:21:00.330093 | orchestrator | {
2026-03-28 01:21:00.330187 | orchestrator | "mon": {
2026-03-28 01:21:00.330198 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-03-28 01:21:00.330209 | orchestrator | },
2026-03-28 01:21:00.330217 | orchestrator | "mgr": {
2026-03-28 01:21:00.330227 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-03-28 01:21:00.330235 | orchestrator | },
2026-03-28 01:21:00.330242 | orchestrator | "osd": {
2026-03-28 01:21:00.330249 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6
2026-03-28 01:21:00.330256 | orchestrator | },
2026-03-28 01:21:00.330263 | orchestrator | "mds": {
2026-03-28 01:21:00.330270 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-03-28 01:21:00.330276 | orchestrator | },
2026-03-28 01:21:00.330283 | orchestrator | "rgw": {
2026-03-28 01:21:00.330290 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-03-28 01:21:00.330317 | orchestrator | },
2026-03-28 01:21:00.330325 | orchestrator | "overall": {
2026-03-28 01:21:00.330332 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18
2026-03-28 01:21:00.330339 | orchestrator | }
2026-03-28 01:21:00.330345 | orchestrator | }
2026-03-28 01:21:00.389043 | orchestrator |
2026-03-28 01:21:00.389117 | orchestrator | # Ceph OSD tree
2026-03-28 01:21:00.389123 | orchestrator |
2026-03-28 01:21:00.389128 | orchestrator | + echo
2026-03-28 01:21:00.389134 | orchestrator | + echo '# Ceph OSD tree'
2026-03-28 01:21:00.389139 | orchestrator | + echo
2026-03-28 01:21:00.389144 | orchestrator | + ceph osd df tree
2026-03-28 01:21:00.927237 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-03-28 01:21:00.927337 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 364 MiB 113 GiB 5.86 1.00 - root default
2026-03-28 01:21:00.927347 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 121 MiB 38 GiB 5.86 1.00 - host testbed-node-3
2026-03-28 01:21:00.927355 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 52 MiB 18 GiB 7.77 1.33 197 up osd.1
2026-03-28 01:21:00.927363 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 807 MiB 738 MiB 1 KiB 70 MiB 19 GiB 3.95 0.67 191 up osd.5
2026-03-28 01:21:00.927372 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 121 MiB 38 GiB 5.86 1.00 - host testbed-node-4
2026-03-28 01:21:00.927386 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 52 MiB 19 GiB 5.42 0.93 190 up osd.0
2026-03-28 01:21:00.927405 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.29 1.07 202 up osd.4
2026-03-28 01:21:00.927418 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 121 MiB 38 GiB 5.86 1.00 - host testbed-node-5
2026-03-28 01:21:00.927431 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 879 MiB 809 MiB 1 KiB 70 MiB 19 GiB 4.30 0.73 196 up osd.2
2026-03-28 01:21:00.927444 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 52 MiB 18 GiB 7.42 1.27 194 up osd.3
2026-03-28 01:21:00.927456 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 364 MiB 113 GiB 5.86
2026-03-28 01:21:00.927468 | orchestrator | MIN/MAX VAR: 0.67/1.33 STDDEV: 1.45
2026-03-28 01:21:00.990360 | orchestrator |
2026-03-28 01:21:00.990456 | orchestrator | # Ceph monitor status
2026-03-28 01:21:00.990471 | orchestrator |
2026-03-28 01:21:00.990483 | orchestrator | + echo
2026-03-28 01:21:00.990495 | orchestrator | + echo '#
Ceph monitor status'
2026-03-28 01:21:00.990507 | orchestrator | + echo
2026-03-28 01:21:00.990518 | orchestrator | + ceph mon stat
2026-03-28 01:21:01.647233 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-03-28 01:21:01.697955 | orchestrator |
2026-03-28 01:21:01.698082 | orchestrator | # Ceph quorum status
2026-03-28 01:21:01.698106 | orchestrator |
2026-03-28 01:21:01.698114 | orchestrator | + echo
2026-03-28 01:21:01.698122 | orchestrator | + echo '# Ceph quorum status'
2026-03-28 01:21:01.698129 | orchestrator | + echo
2026-03-28 01:21:01.698153 | orchestrator | + ceph quorum_status
2026-03-28 01:21:01.698439 | orchestrator | + jq
2026-03-28 01:21:02.372132 | orchestrator | {
2026-03-28 01:21:02.372210 | orchestrator | "election_epoch": 8,
2026-03-28 01:21:02.372219 | orchestrator | "quorum": [
2026-03-28 01:21:02.372226 | orchestrator | 0,
2026-03-28 01:21:02.372232 | orchestrator | 1,
2026-03-28 01:21:02.372238 | orchestrator | 2
2026-03-28 01:21:02.372244 | orchestrator | ],
2026-03-28 01:21:02.372250 | orchestrator | "quorum_names": [
2026-03-28 01:21:02.372256 | orchestrator | "testbed-node-0",
2026-03-28 01:21:02.372262 | orchestrator | "testbed-node-1",
2026-03-28 01:21:02.372268 | orchestrator | "testbed-node-2"
2026-03-28 01:21:02.372274 | orchestrator | ],
2026-03-28 01:21:02.372301 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-03-28 01:21:02.372308 | orchestrator | "quorum_age": 1812,
2026-03-28 01:21:02.372314 | orchestrator | "features": {
2026-03-28 01:21:02.372320 | orchestrator | "quorum_con": "4540138322906710015",
2026-03-28 01:21:02.372326 | orchestrator | "quorum_mon": [
2026-03-28 01:21:02.372331 | orchestrator | "kraken",
2026-03-28 01:21:02.372337 | orchestrator | "luminous",
2026-03-28 01:21:02.372343 | orchestrator | "mimic",
2026-03-28 01:21:02.372349 | orchestrator | "osdmap-prune",
2026-03-28 01:21:02.372355 | orchestrator | "nautilus",
2026-03-28 01:21:02.372361 | orchestrator | "octopus",
2026-03-28 01:21:02.372367 | orchestrator | "pacific",
2026-03-28 01:21:02.372372 | orchestrator | "elector-pinging",
2026-03-28 01:21:02.372378 | orchestrator | "quincy",
2026-03-28 01:21:02.372384 | orchestrator | "reef"
2026-03-28 01:21:02.372390 | orchestrator | ]
2026-03-28 01:21:02.372395 | orchestrator | },
2026-03-28 01:21:02.372401 | orchestrator | "monmap": {
2026-03-28 01:21:02.372407 | orchestrator | "epoch": 1,
2026-03-28 01:21:02.372413 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-03-28 01:21:02.372419 | orchestrator | "modified": "2026-03-28T00:50:30.843645Z",
2026-03-28 01:21:02.372426 | orchestrator | "created": "2026-03-28T00:50:30.843645Z",
2026-03-28 01:21:02.372432 | orchestrator | "min_mon_release": 18,
2026-03-28 01:21:02.372437 | orchestrator | "min_mon_release_name": "reef",
2026-03-28 01:21:02.372443 | orchestrator | "election_strategy": 1,
2026-03-28 01:21:02.372449 | orchestrator | "disallowed_leaders": "",
2026-03-28 01:21:02.372455 | orchestrator | "stretch_mode": false,
2026-03-28 01:21:02.372461 | orchestrator | "tiebreaker_mon": "",
2026-03-28 01:21:02.372466 | orchestrator | "removed_ranks": "",
2026-03-28 01:21:02.372472 | orchestrator | "features": {
2026-03-28 01:21:02.372478 | orchestrator | "persistent": [
2026-03-28 01:21:02.372483 | orchestrator | "kraken",
2026-03-28 01:21:02.372489 | orchestrator | "luminous",
2026-03-28 01:21:02.372495 | orchestrator | "mimic",
2026-03-28 01:21:02.372500 | orchestrator | "osdmap-prune",
2026-03-28 01:21:02.372506 | orchestrator | "nautilus",
2026-03-28 01:21:02.372512 | orchestrator | "octopus",
2026-03-28 01:21:02.372518 | orchestrator | "pacific",
2026-03-28 01:21:02.372523 | orchestrator | "elector-pinging",
2026-03-28 01:21:02.372529 | orchestrator | "quincy",
2026-03-28 01:21:02.372535 | orchestrator | "reef"
2026-03-28 01:21:02.372541 | orchestrator | ],
2026-03-28 01:21:02.372546 | orchestrator | "optional": []
2026-03-28 01:21:02.372552 | orchestrator | },
2026-03-28 01:21:02.372558 | orchestrator | "mons": [
2026-03-28 01:21:02.372576 | orchestrator | {
2026-03-28 01:21:02.372582 | orchestrator | "rank": 0,
2026-03-28 01:21:02.372595 | orchestrator | "name": "testbed-node-0",
2026-03-28 01:21:02.372601 | orchestrator | "public_addrs": {
2026-03-28 01:21:02.372607 | orchestrator | "addrvec": [
2026-03-28 01:21:02.372613 | orchestrator | {
2026-03-28 01:21:02.372618 | orchestrator | "type": "v2",
2026-03-28 01:21:02.372624 | orchestrator | "addr": "192.168.16.10:3300",
2026-03-28 01:21:02.372630 | orchestrator | "nonce": 0
2026-03-28 01:21:02.372636 | orchestrator | },
2026-03-28 01:21:02.372641 | orchestrator | {
2026-03-28 01:21:02.372647 | orchestrator | "type": "v1",
2026-03-28 01:21:02.372653 | orchestrator | "addr": "192.168.16.10:6789",
2026-03-28 01:21:02.372659 | orchestrator | "nonce": 0
2026-03-28 01:21:02.372664 | orchestrator | }
2026-03-28 01:21:02.372670 | orchestrator | ]
2026-03-28 01:21:02.372676 | orchestrator | },
2026-03-28 01:21:02.372682 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-03-28 01:21:02.372688 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-03-28 01:21:02.372693 | orchestrator | "priority": 0,
2026-03-28 01:21:02.372699 | orchestrator | "weight": 0,
2026-03-28 01:21:02.372705 | orchestrator | "crush_location": "{}"
2026-03-28 01:21:02.372710 | orchestrator | },
2026-03-28 01:21:02.372716 | orchestrator | {
2026-03-28 01:21:02.372722 | orchestrator | "rank": 1,
2026-03-28 01:21:02.372728 | orchestrator | "name": "testbed-node-1",
2026-03-28 01:21:02.372733 | orchestrator | "public_addrs": {
2026-03-28 01:21:02.372739 | orchestrator | "addrvec": [
2026-03-28 01:21:02.372745 | orchestrator | {
2026-03-28 01:21:02.372751 | orchestrator | "type": "v2",
2026-03-28 01:21:02.372756 | orchestrator | "addr": "192.168.16.11:3300",
2026-03-28 01:21:02.372762 | orchestrator | "nonce": 0
2026-03-28 01:21:02.372768 | orchestrator | },
2026-03-28 01:21:02.372775 | orchestrator | {
2026-03-28 01:21:02.372787 | orchestrator | "type": "v1",
2026-03-28 01:21:02.372794 | orchestrator | "addr": "192.168.16.11:6789",
2026-03-28 01:21:02.372801 | orchestrator | "nonce": 0
2026-03-28 01:21:02.372841 | orchestrator | }
2026-03-28 01:21:02.372848 | orchestrator | ]
2026-03-28 01:21:02.372855 | orchestrator | },
2026-03-28 01:21:02.372862 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-03-28 01:21:02.372869 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-03-28 01:21:02.372875 | orchestrator | "priority": 0,
2026-03-28 01:21:02.372882 | orchestrator | "weight": 0,
2026-03-28 01:21:02.372888 | orchestrator | "crush_location": "{}"
2026-03-28 01:21:02.372895 | orchestrator | },
2026-03-28 01:21:02.372902 | orchestrator | {
2026-03-28 01:21:02.372908 | orchestrator | "rank": 2,
2026-03-28 01:21:02.372915 | orchestrator | "name": "testbed-node-2",
2026-03-28 01:21:02.372921 | orchestrator | "public_addrs": {
2026-03-28 01:21:02.372928 | orchestrator | "addrvec": [
2026-03-28 01:21:02.372934 | orchestrator | {
2026-03-28 01:21:02.372941 | orchestrator | "type": "v2",
2026-03-28 01:21:02.372947 | orchestrator | "addr": "192.168.16.12:3300",
2026-03-28 01:21:02.372954 | orchestrator | "nonce": 0
2026-03-28 01:21:02.372961 | orchestrator | },
2026-03-28 01:21:02.372968 | orchestrator | {
2026-03-28 01:21:02.372975 | orchestrator | "type": "v1",
2026-03-28 01:21:02.372985 | orchestrator | "addr": "192.168.16.12:6789",
2026-03-28 01:21:02.372994 | orchestrator | "nonce": 0
2026-03-28 01:21:02.373003 | orchestrator | }
2026-03-28 01:21:02.373015 | orchestrator | ]
2026-03-28 01:21:02.373029 | orchestrator | },
2026-03-28 01:21:02.373038 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-03-28 01:21:02.373048 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-03-28 01:21:02.373057 | orchestrator | "priority": 0,
2026-03-28 01:21:02.373067 | orchestrator | "weight": 0,
2026-03-28 01:21:02.373077 | orchestrator | "crush_location": "{}"
2026-03-28 01:21:02.373086 | orchestrator | }
2026-03-28 01:21:02.373096 | orchestrator | ]
2026-03-28 01:21:02.373106 | orchestrator | }
2026-03-28 01:21:02.373116 | orchestrator | }
2026-03-28 01:21:02.373138 | orchestrator |
2026-03-28 01:21:02.373148 | orchestrator | # Ceph free space status
2026-03-28 01:21:02.373158 | orchestrator |
2026-03-28 01:21:02.373167 | orchestrator | + echo
2026-03-28 01:21:02.373177 | orchestrator | + echo '# Ceph free space status'
2026-03-28 01:21:02.373183 | orchestrator | + echo
2026-03-28 01:21:02.373189 | orchestrator | + ceph df
2026-03-28 01:21:02.964932 | orchestrator | --- RAW STORAGE ---
2026-03-28 01:21:02.965013 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-03-28 01:21:02.965075 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.86
2026-03-28 01:21:02.965084 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.86
2026-03-28 01:21:02.965091 | orchestrator |
2026-03-28 01:21:02.965098 | orchestrator | --- POOLS ---
2026-03-28 01:21:02.965106 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-03-28 01:21:02.965115 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2026-03-28 01:21:02.965119 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-03-28 01:21:02.965123 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-03-28 01:21:02.965127 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-03-28 01:21:02.965131 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-03-28 01:21:02.965135 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-03-28 01:21:02.965139 | orchestrator | default.rgw.log 7 32
3.6 KiB 209 408 KiB 0 35 GiB
2026-03-28 01:21:02.965143 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-03-28 01:21:02.965147 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB
2026-03-28 01:21:02.965151 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-03-28 01:21:02.965155 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-03-28 01:21:02.965158 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB
2026-03-28 01:21:02.965162 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-03-28 01:21:02.965186 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-03-28 01:21:03.034243 | orchestrator | ++ semver latest 5.0.0
2026-03-28 01:21:03.082900 | orchestrator | + [[ -1 -eq -1 ]]
2026-03-28 01:21:03.083003 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-28 01:21:03.083014 | orchestrator | + osism apply facts
2026-03-28 01:21:04.485654 | orchestrator | 2026-03-28 01:21:04 | INFO  | Prepare task for execution of facts.
2026-03-28 01:21:04.571574 | orchestrator | 2026-03-28 01:21:04 | INFO  | Task 3e31e63e-0a3c-4bd7-bd4f-7421017b1956 (facts) was prepared for execution.
2026-03-28 01:21:04.571697 | orchestrator | 2026-03-28 01:21:04 | INFO  | It takes a moment until task 3e31e63e-0a3c-4bd7-bd4f-7421017b1956 (facts) has been started and output is visible here.
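The check script above gates on `ceph -s` reporting a healthy cluster (HEALTH_OK, 6/6 OSDs up and in, 401/401 PGs active+clean). A health gate like this can be sketched in Python against the machine-readable form of the same command; the helper below is hypothetical (not part of the testbed scripts), and the field names assume the JSON layout of `ceph -s --format json` in recent Ceph releases:

```python
def ceph_cluster_ok(status: dict) -> bool:
    """Return True when the cluster reports a clean state.

    `status` is the parsed output of `ceph -s --format json`
    (field layout assumed from recent Ceph releases).
    """
    health = status["health"]["status"]
    osdmap = status["osdmap"]
    pgmap = status["pgmap"]
    # All OSDs must be up and in ...
    osds_ok = osdmap["num_osds"] == osdmap["num_up_osds"] == osdmap["num_in_osds"]
    # ... and every placement group active+clean.
    clean = sum(s["count"] for s in pgmap["pgs_by_state"]
                if s["state_name"] == "active+clean")
    return health == "HEALTH_OK" and osds_ok and clean == pgmap["num_pgs"]

# Values mirroring the log above: HEALTH_OK, 6/6 OSDs up+in, 401/401 PGs active+clean.
status = {
    "health": {"status": "HEALTH_OK"},
    "osdmap": {"num_osds": 6, "num_up_osds": 6, "num_in_osds": 6},
    "pgmap": {"num_pgs": 401,
              "pgs_by_state": [{"state_name": "active+clean", "count": 401}]},
}
print(ceph_cluster_ok(status))  # True
```

Parsing the JSON output is more robust than scraping the human-readable `ceph -s` text, which is what makes such a gate usable in CI.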
2026-03-28 01:21:18.442641 | orchestrator |
2026-03-28 01:21:18.442735 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-28 01:21:18.442746 | orchestrator |
2026-03-28 01:21:18.442755 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-28 01:21:18.442763 | orchestrator | Saturday 28 March 2026 01:21:08 +0000 (0:00:00.391) 0:00:00.391 ********
2026-03-28 01:21:18.442771 | orchestrator | ok: [testbed-manager]
2026-03-28 01:21:18.442780 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:21:18.442787 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:21:18.442795 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:21:18.442832 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:21:18.442842 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:21:18.442849 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:21:18.442855 | orchestrator |
2026-03-28 01:21:18.442862 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-28 01:21:18.442868 | orchestrator | Saturday 28 March 2026 01:21:09 +0000 (0:00:01.467) 0:00:01.859 ********
2026-03-28 01:21:18.442875 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:21:18.442882 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:21:18.442888 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:21:18.442895 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:21:18.442901 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:21:18.442908 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:21:18.442918 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:21:18.442927 | orchestrator |
2026-03-28 01:21:18.442937 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-28 01:21:18.442947 | orchestrator |
2026-03-28 01:21:18.442957 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-28 01:21:18.442967 | orchestrator | Saturday 28 March 2026 01:21:11 +0000 (0:00:01.373) 0:00:03.233 ********
2026-03-28 01:21:18.442977 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:21:18.442986 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:21:18.442996 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:21:18.443007 | orchestrator | ok: [testbed-manager]
2026-03-28 01:21:18.443016 | orchestrator | ok: [testbed-node-5]
2026-03-28 01:21:18.443026 | orchestrator | ok: [testbed-node-3]
2026-03-28 01:21:18.443037 | orchestrator | ok: [testbed-node-4]
2026-03-28 01:21:18.443047 | orchestrator |
2026-03-28 01:21:18.443057 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-28 01:21:18.443067 | orchestrator |
2026-03-28 01:21:18.443074 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-28 01:21:18.443081 | orchestrator | Saturday 28 March 2026 01:21:17 +0000 (0:00:06.121) 0:00:09.354 ********
2026-03-28 01:21:18.443087 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:21:18.443093 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:21:18.443100 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:21:18.443106 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:21:18.443112 | orchestrator | skipping: [testbed-node-3]
2026-03-28 01:21:18.443119 | orchestrator | skipping: [testbed-node-4]
2026-03-28 01:21:18.443125 | orchestrator | skipping: [testbed-node-5]
2026-03-28 01:21:18.443132 | orchestrator |
2026-03-28 01:21:18.443161 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:21:18.443174 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:21:18.443186 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:21:18.443197 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:21:18.443208 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:21:18.443217 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:21:18.443228 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:21:18.443236 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:21:18.443242 | orchestrator |
2026-03-28 01:21:18.443249 | orchestrator |
2026-03-28 01:21:18.443259 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:21:18.443269 | orchestrator | Saturday 28 March 2026 01:21:18 +0000 (0:00:00.829) 0:00:10.184 ********
2026-03-28 01:21:18.443287 | orchestrator | ===============================================================================
2026-03-28 01:21:18.443298 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.12s
2026-03-28 01:21:18.443308 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.47s
2026-03-28 01:21:18.443317 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s
2026-03-28 01:21:18.443326 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.83s
2026-03-28 01:21:18.667125 | orchestrator | + osism validate ceph-mons
2026-03-28 01:21:51.678950 | orchestrator |
2026-03-28 01:21:51.679094 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-03-28 01:21:51.679123 | orchestrator |
2026-03-28 01:21:51.679142 | orchestrator | TASK [Get timestamp for report file]
*******************************************
2026-03-28 01:21:51.679161 | orchestrator | Saturday 28 March 2026 01:21:34 +0000 (0:00:00.593) 0:00:00.593 ********
2026-03-28 01:21:51.679181 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 01:21:51.679202 | orchestrator |
2026-03-28 01:21:51.679221 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-28 01:21:51.679243 | orchestrator | Saturday 28 March 2026 01:21:35 +0000 (0:00:01.082) 0:00:01.676 ********
2026-03-28 01:21:51.679264 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-28 01:21:51.679284 | orchestrator |
2026-03-28 01:21:51.679303 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-28 01:21:51.679314 | orchestrator | Saturday 28 March 2026 01:21:36 +0000 (0:00:00.761) 0:00:02.437 ********
2026-03-28 01:21:51.679326 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:21:51.679338 | orchestrator |
2026-03-28 01:21:51.679348 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-28 01:21:51.679359 | orchestrator | Saturday 28 March 2026 01:21:36 +0000 (0:00:00.120) 0:00:02.558 ********
2026-03-28 01:21:51.679370 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:21:51.679381 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:21:51.679394 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:21:51.679408 | orchestrator |
2026-03-28 01:21:51.679420 | orchestrator | TASK [Get container info] ******************************************************
2026-03-28 01:21:51.679434 | orchestrator | Saturday 28 March 2026 01:21:36 +0000 (0:00:00.326) 0:00:02.885 ********
2026-03-28 01:21:51.679504 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:21:51.679519 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:21:51.679531 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:21:51.679543 | orchestrator |
2026-03-28 01:21:51.679555 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-28 01:21:51.679568 | orchestrator | Saturday 28 March 2026 01:21:38 +0000 (0:00:01.635) 0:00:04.521 ********
2026-03-28 01:21:51.679581 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:21:51.679594 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:21:51.679606 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:21:51.679618 | orchestrator |
2026-03-28 01:21:51.679631 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-28 01:21:51.679643 | orchestrator | Saturday 28 March 2026 01:21:38 +0000 (0:00:00.318) 0:00:04.839 ********
2026-03-28 01:21:51.679656 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:21:51.679668 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:21:51.679739 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:21:51.679752 | orchestrator |
2026-03-28 01:21:51.679765 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-28 01:21:51.679802 | orchestrator | Saturday 28 March 2026 01:21:38 +0000 (0:00:00.346) 0:00:05.186 ********
2026-03-28 01:21:51.679814 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:21:51.679825 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:21:51.679836 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:21:51.679846 | orchestrator |
2026-03-28 01:21:51.679857 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-03-28 01:21:51.679868 | orchestrator | Saturday 28 March 2026 01:21:39 +0000 (0:00:00.377) 0:00:05.563 ********
2026-03-28 01:21:51.679879 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:21:51.679890 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:21:51.679900 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:21:51.679911 | orchestrator |
2026-03-28 01:21:51.679922 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-03-28 01:21:51.679933 | orchestrator | Saturday 28 March 2026 01:21:39 +0000 (0:00:00.527) 0:00:06.090 ********
2026-03-28 01:21:51.679943 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:21:51.679954 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:21:51.679965 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:21:51.679976 | orchestrator |
2026-03-28 01:21:51.679987 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-28 01:21:51.679998 | orchestrator | Saturday 28 March 2026 01:21:40 +0000 (0:00:00.304) 0:00:06.395 ********
2026-03-28 01:21:51.680009 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:21:51.680020 | orchestrator |
2026-03-28 01:21:51.680031 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-28 01:21:51.680042 | orchestrator | Saturday 28 March 2026 01:21:40 +0000 (0:00:00.271) 0:00:06.667 ********
2026-03-28 01:21:51.680052 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:21:51.680063 | orchestrator |
2026-03-28 01:21:51.680074 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-28 01:21:51.680085 | orchestrator | Saturday 28 March 2026 01:21:40 +0000 (0:00:00.313) 0:00:06.981 ********
2026-03-28 01:21:51.680096 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:21:51.680107 | orchestrator |
2026-03-28 01:21:51.680118 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 01:21:51.680129 | orchestrator | Saturday 28 March 2026 01:21:40 +0000 (0:00:00.304) 0:00:07.285 ********
2026-03-28 01:21:51.680139 | orchestrator |
2026-03-28 01:21:51.680150 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 01:21:51.680161 | orchestrator | Saturday 28 March 2026 01:21:40 +0000 (0:00:00.078) 0:00:07.364 ********
2026-03-28 01:21:51.680171 | orchestrator |
2026-03-28 01:21:51.680182 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-28 01:21:51.680193 | orchestrator | Saturday 28 March 2026 01:21:41 +0000 (0:00:00.101) 0:00:07.465 ********
2026-03-28 01:21:51.680204 | orchestrator |
2026-03-28 01:21:51.680227 | orchestrator | TASK [Print report file information] *******************************************
2026-03-28 01:21:51.680238 | orchestrator | Saturday 28 March 2026 01:21:41 +0000 (0:00:00.270) 0:00:07.735 ********
2026-03-28 01:21:51.680248 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:21:51.680259 | orchestrator |
2026-03-28 01:21:51.680270 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-28 01:21:51.680281 | orchestrator | Saturday 28 March 2026 01:21:41 +0000 (0:00:00.269) 0:00:08.005 ********
2026-03-28 01:21:51.680292 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:21:51.680303 | orchestrator |
2026-03-28 01:21:51.680341 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-03-28 01:21:51.680361 | orchestrator | Saturday 28 March 2026 01:21:41 +0000 (0:00:00.323) 0:00:08.329 ********
2026-03-28 01:21:51.680379 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:21:51.680397 | orchestrator |
2026-03-28 01:21:51.680415 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-03-28 01:21:51.680434 | orchestrator | Saturday 28 March 2026 01:21:42 +0000 (0:00:00.134) 0:00:08.463 ********
2026-03-28 01:21:51.680452 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:21:51.680470 | orchestrator |
2026-03-28 01:21:51.680489 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-03-28 01:21:51.680506 | orchestrator |
Saturday 28 March 2026 01:21:43 +0000 (0:00:01.797) 0:00:10.261 ******** 2026-03-28 01:21:51.680522 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:21:51.680533 | orchestrator | 2026-03-28 01:21:51.680543 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-03-28 01:21:51.680554 | orchestrator | Saturday 28 March 2026 01:21:44 +0000 (0:00:00.371) 0:00:10.632 ******** 2026-03-28 01:21:51.680564 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:21:51.680575 | orchestrator | 2026-03-28 01:21:51.680586 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-03-28 01:21:51.680597 | orchestrator | Saturday 28 March 2026 01:21:44 +0000 (0:00:00.133) 0:00:10.766 ******** 2026-03-28 01:21:51.680615 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:21:51.680641 | orchestrator | 2026-03-28 01:21:51.680662 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-03-28 01:21:51.680678 | orchestrator | Saturday 28 March 2026 01:21:44 +0000 (0:00:00.345) 0:00:11.111 ******** 2026-03-28 01:21:51.680696 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:21:51.680712 | orchestrator | 2026-03-28 01:21:51.680727 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-03-28 01:21:51.680764 | orchestrator | Saturday 28 March 2026 01:21:45 +0000 (0:00:00.319) 0:00:11.431 ******** 2026-03-28 01:21:51.680863 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:21:51.680884 | orchestrator | 2026-03-28 01:21:51.680901 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-03-28 01:21:51.680918 | orchestrator | Saturday 28 March 2026 01:21:45 +0000 (0:00:00.121) 0:00:11.552 ******** 2026-03-28 01:21:51.680929 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:21:51.680940 | orchestrator | 2026-03-28 01:21:51.680951 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-03-28 01:21:51.680962 | orchestrator | Saturday 28 March 2026 01:21:45 +0000 (0:00:00.142) 0:00:11.695 ******** 2026-03-28 01:21:51.680972 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:21:51.680983 | orchestrator | 2026-03-28 01:21:51.680994 | orchestrator | TASK [Gather status data] ****************************************************** 2026-03-28 01:21:51.681004 | orchestrator | Saturday 28 March 2026 01:21:45 +0000 (0:00:00.308) 0:00:12.003 ******** 2026-03-28 01:21:51.681015 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:21:51.681026 | orchestrator | 2026-03-28 01:21:51.681037 | orchestrator | TASK [Set health test data] **************************************************** 2026-03-28 01:21:51.681047 | orchestrator | Saturday 28 March 2026 01:21:47 +0000 (0:00:01.564) 0:00:13.568 ******** 2026-03-28 01:21:51.681058 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:21:51.681068 | orchestrator | 2026-03-28 01:21:51.681091 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-03-28 01:21:51.681102 | orchestrator | Saturday 28 March 2026 01:21:47 +0000 (0:00:00.345) 0:00:13.913 ******** 2026-03-28 01:21:51.681112 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:21:51.681123 | orchestrator | 2026-03-28 01:21:51.681134 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-03-28 01:21:51.681144 | orchestrator | Saturday 28 March 2026 01:21:47 +0000 (0:00:00.172) 0:00:14.086 ******** 2026-03-28 01:21:51.681155 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:21:51.681166 | orchestrator | 2026-03-28 01:21:51.681176 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-03-28 01:21:51.681187 | orchestrator | Saturday 28 March 2026 01:21:47 +0000 (0:00:00.155) 0:00:14.241 ******** 2026-03-28 01:21:51.681198 | 
orchestrator | skipping: [testbed-node-0] 2026-03-28 01:21:51.681209 | orchestrator | 2026-03-28 01:21:51.681219 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-28 01:21:51.681230 | orchestrator | Saturday 28 March 2026 01:21:47 +0000 (0:00:00.146) 0:00:14.387 ******** 2026-03-28 01:21:51.681241 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:21:51.681257 | orchestrator | 2026-03-28 01:21:51.681268 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-28 01:21:51.681279 | orchestrator | Saturday 28 March 2026 01:21:48 +0000 (0:00:00.143) 0:00:14.531 ******** 2026-03-28 01:21:51.681290 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:21:51.681301 | orchestrator | 2026-03-28 01:21:51.681311 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-28 01:21:51.681322 | orchestrator | Saturday 28 March 2026 01:21:48 +0000 (0:00:00.280) 0:00:14.811 ******** 2026-03-28 01:21:51.681333 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:21:51.681344 | orchestrator | 2026-03-28 01:21:51.681354 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:21:51.681365 | orchestrator | Saturday 28 March 2026 01:21:48 +0000 (0:00:00.254) 0:00:15.066 ******** 2026-03-28 01:21:51.681376 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:21:51.681387 | orchestrator | 2026-03-28 01:21:51.681398 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:21:51.681416 | orchestrator | Saturday 28 March 2026 01:21:50 +0000 (0:00:02.020) 0:00:17.087 ******** 2026-03-28 01:21:51.681427 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:21:51.681438 | orchestrator | 2026-03-28 01:21:51.681449 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-03-28 01:21:51.681459 | orchestrator | Saturday 28 March 2026 01:21:50 +0000 (0:00:00.272) 0:00:17.359 ******** 2026-03-28 01:21:51.681470 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:21:51.681481 | orchestrator | 2026-03-28 01:21:51.681504 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:21:54.256233 | orchestrator | Saturday 28 March 2026 01:21:51 +0000 (0:00:00.701) 0:00:18.061 ******** 2026-03-28 01:21:54.256360 | orchestrator | 2026-03-28 01:21:54.256385 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:21:54.256403 | orchestrator | Saturday 28 March 2026 01:21:51 +0000 (0:00:00.085) 0:00:18.147 ******** 2026-03-28 01:21:54.256419 | orchestrator | 2026-03-28 01:21:54.256435 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:21:54.256451 | orchestrator | Saturday 28 March 2026 01:21:51 +0000 (0:00:00.076) 0:00:18.223 ******** 2026-03-28 01:21:54.256467 | orchestrator | 2026-03-28 01:21:54.256483 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-28 01:21:54.256499 | orchestrator | Saturday 28 March 2026 01:21:51 +0000 (0:00:00.086) 0:00:18.310 ******** 2026-03-28 01:21:54.256515 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:21:54.256531 | orchestrator | 2026-03-28 01:21:54.256546 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:21:54.256599 | orchestrator | Saturday 28 March 2026 01:21:53 +0000 (0:00:01.481) 0:00:19.791 ******** 2026-03-28 01:21:54.256617 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-28 01:21:54.256633 | orchestrator |  "msg": [ 
2026-03-28 01:21:54.256650 | orchestrator |  "Validator run completed.", 2026-03-28 01:21:54.256666 | orchestrator |  "You can find the report file here:", 2026-03-28 01:21:54.256683 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-28T01:21:35+00:00-report.json", 2026-03-28 01:21:54.256701 | orchestrator |  "on the following host:", 2026-03-28 01:21:54.256718 | orchestrator |  "testbed-manager" 2026-03-28 01:21:54.256733 | orchestrator |  ] 2026-03-28 01:21:54.256756 | orchestrator | } 2026-03-28 01:21:54.256808 | orchestrator | 2026-03-28 01:21:54.256828 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:21:54.256849 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-28 01:21:54.256870 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:21:54.256905 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:21:54.256921 | orchestrator | 2026-03-28 01:21:54.256939 | orchestrator | 2026-03-28 01:21:54.256957 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:21:54.256975 | orchestrator | Saturday 28 March 2026 01:21:53 +0000 (0:00:00.472) 0:00:20.264 ******** 2026-03-28 01:21:54.256991 | orchestrator | =============================================================================== 2026-03-28 01:21:54.257008 | orchestrator | Aggregate test results step one ----------------------------------------- 2.02s 2026-03-28 01:21:54.257024 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.80s 2026-03-28 01:21:54.257039 | orchestrator | Get container info ------------------------------------------------------ 1.64s 2026-03-28 01:21:54.257055 | orchestrator | Gather status data 
------------------------------------------------------ 1.56s 2026-03-28 01:21:54.257072 | orchestrator | Write report file ------------------------------------------------------- 1.48s 2026-03-28 01:21:54.257088 | orchestrator | Get timestamp for report file ------------------------------------------- 1.08s 2026-03-28 01:21:54.257106 | orchestrator | Create report output directory ------------------------------------------ 0.76s 2026-03-28 01:21:54.257122 | orchestrator | Aggregate test results step three --------------------------------------- 0.70s 2026-03-28 01:21:54.257138 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.53s 2026-03-28 01:21:54.257153 | orchestrator | Print report file information ------------------------------------------- 0.47s 2026-03-28 01:21:54.257168 | orchestrator | Flush handlers ---------------------------------------------------------- 0.45s 2026-03-28 01:21:54.257184 | orchestrator | Prepare test data ------------------------------------------------------- 0.38s 2026-03-28 01:21:54.257198 | orchestrator | Set quorum test data ---------------------------------------------------- 0.37s 2026-03-28 01:21:54.257213 | orchestrator | Set test result to passed if container is existing ---------------------- 0.35s 2026-03-28 01:21:54.257229 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.35s 2026-03-28 01:21:54.257244 | orchestrator | Set health test data ---------------------------------------------------- 0.35s 2026-03-28 01:21:54.257260 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s 2026-03-28 01:21:54.257276 | orchestrator | Fail due to missing containers ------------------------------------------ 0.32s 2026-03-28 01:21:54.257293 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2026-03-28 01:21:54.257309 | orchestrator | Set test result to failed if 
container is missing ----------------------- 0.32s 2026-03-28 01:21:54.490368 | orchestrator | + osism validate ceph-mgrs 2026-03-28 01:22:25.422827 | orchestrator | 2026-03-28 01:22:25.422936 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-28 01:22:25.422945 | orchestrator | 2026-03-28 01:22:25.422950 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-28 01:22:25.422955 | orchestrator | Saturday 28 March 2026 01:22:09 +0000 (0:00:00.603) 0:00:00.603 ******** 2026-03-28 01:22:25.422959 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:22:25.422963 | orchestrator | 2026-03-28 01:22:25.422967 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-28 01:22:25.422971 | orchestrator | Saturday 28 March 2026 01:22:11 +0000 (0:00:01.127) 0:00:01.730 ******** 2026-03-28 01:22:25.422975 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:22:25.422979 | orchestrator | 2026-03-28 01:22:25.422983 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-28 01:22:25.422986 | orchestrator | Saturday 28 March 2026 01:22:11 +0000 (0:00:00.799) 0:00:02.530 ******** 2026-03-28 01:22:25.422990 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:22:25.422995 | orchestrator | 2026-03-28 01:22:25.422999 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-28 01:22:25.423003 | orchestrator | Saturday 28 March 2026 01:22:12 +0000 (0:00:00.175) 0:00:02.705 ******** 2026-03-28 01:22:25.423006 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:22:25.423010 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:22:25.423014 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:22:25.423018 | orchestrator | 2026-03-28 01:22:25.423021 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-03-28 01:22:25.423025 | orchestrator | Saturday 28 March 2026 01:22:12 +0000 (0:00:00.307) 0:00:03.012 ******** 2026-03-28 01:22:25.423029 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:22:25.423033 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:22:25.423036 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:22:25.423040 | orchestrator | 2026-03-28 01:22:25.423044 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-28 01:22:25.423047 | orchestrator | Saturday 28 March 2026 01:22:13 +0000 (0:00:01.499) 0:00:04.512 ******** 2026-03-28 01:22:25.423052 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:22:25.423055 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:22:25.423059 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:22:25.423063 | orchestrator | 2026-03-28 01:22:25.423067 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-28 01:22:25.423070 | orchestrator | Saturday 28 March 2026 01:22:14 +0000 (0:00:00.315) 0:00:04.828 ******** 2026-03-28 01:22:25.423074 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:22:25.423078 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:22:25.423082 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:22:25.423085 | orchestrator | 2026-03-28 01:22:25.423089 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:22:25.423093 | orchestrator | Saturday 28 March 2026 01:22:14 +0000 (0:00:00.337) 0:00:05.166 ******** 2026-03-28 01:22:25.423097 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:22:25.423100 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:22:25.423104 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:22:25.423108 | orchestrator | 2026-03-28 01:22:25.423112 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-03-28 01:22:25.423115 | orchestrator | Saturday 28 March 2026 01:22:14 +0000 (0:00:00.416) 0:00:05.582 ******** 2026-03-28 01:22:25.423119 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:22:25.423123 | orchestrator | skipping: [testbed-node-1] 2026-03-28 01:22:25.423127 | orchestrator | skipping: [testbed-node-2] 2026-03-28 01:22:25.423130 | orchestrator | 2026-03-28 01:22:25.423134 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-28 01:22:25.423138 | orchestrator | Saturday 28 March 2026 01:22:15 +0000 (0:00:00.542) 0:00:06.125 ******** 2026-03-28 01:22:25.423155 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:22:25.423160 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:22:25.423164 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:22:25.423167 | orchestrator | 2026-03-28 01:22:25.423171 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:22:25.423175 | orchestrator | Saturday 28 March 2026 01:22:15 +0000 (0:00:00.317) 0:00:06.442 ******** 2026-03-28 01:22:25.423179 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:22:25.423182 | orchestrator | 2026-03-28 01:22:25.423186 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:22:25.423190 | orchestrator | Saturday 28 March 2026 01:22:16 +0000 (0:00:00.280) 0:00:06.723 ******** 2026-03-28 01:22:25.423194 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:22:25.423197 | orchestrator | 2026-03-28 01:22:25.423201 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 01:22:25.423205 | orchestrator | Saturday 28 March 2026 01:22:16 +0000 (0:00:00.285) 0:00:07.009 ******** 2026-03-28 01:22:25.423209 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:22:25.423212 | orchestrator | 2026-03-28 01:22:25.423216 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-03-28 01:22:25.423220 | orchestrator | Saturday 28 March 2026 01:22:16 +0000 (0:00:00.271) 0:00:07.280 ******** 2026-03-28 01:22:25.423223 | orchestrator | 2026-03-28 01:22:25.423227 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:22:25.423231 | orchestrator | Saturday 28 March 2026 01:22:16 +0000 (0:00:00.077) 0:00:07.357 ******** 2026-03-28 01:22:25.423234 | orchestrator | 2026-03-28 01:22:25.423238 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:22:25.423242 | orchestrator | Saturday 28 March 2026 01:22:16 +0000 (0:00:00.073) 0:00:07.431 ******** 2026-03-28 01:22:25.423246 | orchestrator | 2026-03-28 01:22:25.423249 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:22:25.423253 | orchestrator | Saturday 28 March 2026 01:22:17 +0000 (0:00:00.262) 0:00:07.693 ******** 2026-03-28 01:22:25.423257 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:22:25.423261 | orchestrator | 2026-03-28 01:22:25.423264 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-28 01:22:25.423268 | orchestrator | Saturday 28 March 2026 01:22:17 +0000 (0:00:00.273) 0:00:07.967 ******** 2026-03-28 01:22:25.423272 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:22:25.423275 | orchestrator | 2026-03-28 01:22:25.423301 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-28 01:22:25.423305 | orchestrator | Saturday 28 March 2026 01:22:17 +0000 (0:00:00.272) 0:00:08.239 ******** 2026-03-28 01:22:25.423309 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:22:25.423313 | orchestrator | 2026-03-28 01:22:25.423317 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-03-28 01:22:25.423321 | orchestrator | Saturday 28 March 2026 01:22:17 +0000 (0:00:00.115) 0:00:08.355 ******** 2026-03-28 01:22:25.423325 | orchestrator | changed: [testbed-node-0] 2026-03-28 01:22:25.423329 | orchestrator | 2026-03-28 01:22:25.423333 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-28 01:22:25.423337 | orchestrator | Saturday 28 March 2026 01:22:19 +0000 (0:00:01.963) 0:00:10.319 ******** 2026-03-28 01:22:25.423341 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:22:25.423346 | orchestrator | 2026-03-28 01:22:25.423350 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-28 01:22:25.423354 | orchestrator | Saturday 28 March 2026 01:22:19 +0000 (0:00:00.263) 0:00:10.582 ******** 2026-03-28 01:22:25.423359 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:22:25.423363 | orchestrator | 2026-03-28 01:22:25.423367 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-28 01:22:25.423371 | orchestrator | Saturday 28 March 2026 01:22:20 +0000 (0:00:00.353) 0:00:10.936 ******** 2026-03-28 01:22:25.423375 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:22:25.423384 | orchestrator | 2026-03-28 01:22:25.423388 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-28 01:22:25.423393 | orchestrator | Saturday 28 March 2026 01:22:20 +0000 (0:00:00.147) 0:00:11.083 ******** 2026-03-28 01:22:25.423397 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:22:25.423401 | orchestrator | 2026-03-28 01:22:25.423405 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-28 01:22:25.423410 | orchestrator | Saturday 28 March 2026 01:22:20 +0000 (0:00:00.148) 0:00:11.232 ******** 2026-03-28 01:22:25.423414 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 
01:22:25.423419 | orchestrator | 2026-03-28 01:22:25.423423 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-28 01:22:25.423427 | orchestrator | Saturday 28 March 2026 01:22:20 +0000 (0:00:00.259) 0:00:11.491 ******** 2026-03-28 01:22:25.423432 | orchestrator | skipping: [testbed-node-0] 2026-03-28 01:22:25.423436 | orchestrator | 2026-03-28 01:22:25.423440 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:22:25.423444 | orchestrator | Saturday 28 March 2026 01:22:21 +0000 (0:00:00.255) 0:00:11.746 ******** 2026-03-28 01:22:25.423448 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:22:25.423451 | orchestrator | 2026-03-28 01:22:25.423455 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:22:25.423459 | orchestrator | Saturday 28 March 2026 01:22:22 +0000 (0:00:01.663) 0:00:13.410 ******** 2026-03-28 01:22:25.423463 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:22:25.423466 | orchestrator | 2026-03-28 01:22:25.423470 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 01:22:25.423474 | orchestrator | Saturday 28 March 2026 01:22:23 +0000 (0:00:00.315) 0:00:13.725 ******** 2026-03-28 01:22:25.423478 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:22:25.423482 | orchestrator | 2026-03-28 01:22:25.423485 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:22:25.423489 | orchestrator | Saturday 28 March 2026 01:22:23 +0000 (0:00:00.275) 0:00:14.000 ******** 2026-03-28 01:22:25.423493 | orchestrator | 2026-03-28 01:22:25.423497 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:22:25.423500 | orchestrator 
| Saturday 28 March 2026 01:22:23 +0000 (0:00:00.072) 0:00:14.072 ******** 2026-03-28 01:22:25.423504 | orchestrator | 2026-03-28 01:22:25.423508 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:22:25.423511 | orchestrator | Saturday 28 March 2026 01:22:23 +0000 (0:00:00.081) 0:00:14.154 ******** 2026-03-28 01:22:25.423515 | orchestrator | 2026-03-28 01:22:25.423519 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-28 01:22:25.423523 | orchestrator | Saturday 28 March 2026 01:22:23 +0000 (0:00:00.076) 0:00:14.230 ******** 2026-03-28 01:22:25.423526 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-28 01:22:25.423530 | orchestrator | 2026-03-28 01:22:25.423534 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:22:25.423538 | orchestrator | Saturday 28 March 2026 01:22:25 +0000 (0:00:01.412) 0:00:15.643 ******** 2026-03-28 01:22:25.423541 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-28 01:22:25.423545 | orchestrator |  "msg": [ 2026-03-28 01:22:25.423549 | orchestrator |  "Validator run completed.", 2026-03-28 01:22:25.423553 | orchestrator |  "You can find the report file here:", 2026-03-28 01:22:25.423557 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-28T01:22:10+00:00-report.json", 2026-03-28 01:22:25.423561 | orchestrator |  "on the following host:", 2026-03-28 01:22:25.423565 | orchestrator |  "testbed-manager" 2026-03-28 01:22:25.423568 | orchestrator |  ] 2026-03-28 01:22:25.423572 | orchestrator | } 2026-03-28 01:22:25.423576 | orchestrator | 2026-03-28 01:22:25.423580 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:22:25.423590 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-03-28 01:22:25.423595 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:22:25.423605 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-28 01:22:25.794069 | orchestrator | 2026-03-28 01:22:25.794149 | orchestrator | 2026-03-28 01:22:25.794157 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:22:25.794174 | orchestrator | Saturday 28 March 2026 01:22:25 +0000 (0:00:00.412) 0:00:16.055 ******** 2026-03-28 01:22:25.794179 | orchestrator | =============================================================================== 2026-03-28 01:22:25.794183 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.96s 2026-03-28 01:22:25.794187 | orchestrator | Aggregate test results step one ----------------------------------------- 1.66s 2026-03-28 01:22:25.794191 | orchestrator | Get container info ------------------------------------------------------ 1.50s 2026-03-28 01:22:25.794196 | orchestrator | Write report file ------------------------------------------------------- 1.41s 2026-03-28 01:22:25.794199 | orchestrator | Get timestamp for report file ------------------------------------------- 1.13s 2026-03-28 01:22:25.794203 | orchestrator | Create report output directory ------------------------------------------ 0.80s 2026-03-28 01:22:25.794207 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.54s 2026-03-28 01:22:25.794212 | orchestrator | Prepare test data ------------------------------------------------------- 0.42s 2026-03-28 01:22:25.794215 | orchestrator | Flush handlers ---------------------------------------------------------- 0.41s 2026-03-28 01:22:25.794219 | orchestrator | Print report file information ------------------------------------------- 0.41s 2026-03-28 01:22:25.794223 | 
orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.35s 2026-03-28 01:22:25.794227 | orchestrator | Set test result to passed if container is existing ---------------------- 0.34s 2026-03-28 01:22:25.794231 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s 2026-03-28 01:22:25.794234 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s 2026-03-28 01:22:25.794238 | orchestrator | Aggregate test results step two ----------------------------------------- 0.32s 2026-03-28 01:22:25.794242 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2026-03-28 01:22:25.794246 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2026-03-28 01:22:25.794249 | orchestrator | Aggregate test results step one ----------------------------------------- 0.28s 2026-03-28 01:22:25.794253 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s 2026-03-28 01:22:25.794257 | orchestrator | Print report file information ------------------------------------------- 0.27s 2026-03-28 01:22:26.020613 | orchestrator | + osism validate ceph-osds 2026-03-28 01:22:45.896506 | orchestrator | 2026-03-28 01:22:45.896734 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-28 01:22:45.896784 | orchestrator | 2026-03-28 01:22:45.896799 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-28 01:22:45.896813 | orchestrator | Saturday 28 March 2026 01:22:41 +0000 (0:00:00.603) 0:00:00.603 ******** 2026-03-28 01:22:45.896826 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:22:45.896844 | orchestrator | 2026-03-28 01:22:45.896862 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-03-28 01:22:45.896881 | orchestrator | Saturday 28 March 2026 01:22:42 +0000 (0:00:01.132) 0:00:01.735 ******** 2026-03-28 01:22:45.896899 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:22:45.896917 | orchestrator | 2026-03-28 01:22:45.896965 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-28 01:22:45.896985 | orchestrator | Saturday 28 March 2026 01:22:42 +0000 (0:00:00.255) 0:00:01.991 ******** 2026-03-28 01:22:45.897002 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:22:45.897018 | orchestrator | 2026-03-28 01:22:45.897033 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-28 01:22:45.897049 | orchestrator | Saturday 28 March 2026 01:22:43 +0000 (0:00:00.781) 0:00:02.772 ******** 2026-03-28 01:22:45.897067 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:22:45.897086 | orchestrator | 2026-03-28 01:22:45.897103 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-28 01:22:45.897122 | orchestrator | Saturday 28 March 2026 01:22:43 +0000 (0:00:00.128) 0:00:02.901 ******** 2026-03-28 01:22:45.897140 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:22:45.897156 | orchestrator | 2026-03-28 01:22:45.897172 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-28 01:22:45.897189 | orchestrator | Saturday 28 March 2026 01:22:43 +0000 (0:00:00.139) 0:00:03.040 ******** 2026-03-28 01:22:45.897207 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:22:45.897226 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:22:45.897243 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:22:45.897261 | orchestrator | 2026-03-28 01:22:45.897279 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-03-28 01:22:45.897296 | orchestrator | Saturday 28 March 2026 01:22:44 +0000 (0:00:00.480) 0:00:03.521 ******** 2026-03-28 01:22:45.897313 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:22:45.897331 | orchestrator | 2026-03-28 01:22:45.897348 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-28 01:22:45.897366 | orchestrator | Saturday 28 March 2026 01:22:44 +0000 (0:00:00.184) 0:00:03.706 ******** 2026-03-28 01:22:45.897384 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:22:45.897403 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:22:45.897415 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:22:45.897427 | orchestrator | 2026-03-28 01:22:45.897438 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-03-28 01:22:45.897448 | orchestrator | Saturday 28 March 2026 01:22:44 +0000 (0:00:00.362) 0:00:04.068 ******** 2026-03-28 01:22:45.897459 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:22:45.897470 | orchestrator | 2026-03-28 01:22:45.897487 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:22:45.897505 | orchestrator | Saturday 28 March 2026 01:22:45 +0000 (0:00:00.385) 0:00:04.454 ******** 2026-03-28 01:22:45.897523 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:22:45.897542 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:22:45.897560 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:22:45.897578 | orchestrator | 2026-03-28 01:22:45.897595 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-03-28 01:22:45.897613 | orchestrator | Saturday 28 March 2026 01:22:45 +0000 (0:00:00.298) 0:00:04.752 ******** 2026-03-28 01:22:45.897632 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4e7319709f28f26fbb13a114d5c5eff0d234451d0686f220bb11ca248445345d', 'image': 
'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-28 01:22:45.897684 | orchestrator | skipping: [testbed-node-3] => (item={'id': '00520cf6953cf4e1bccc7b1b62a63a2920e4a2b6c33215cc5b596239c328c00d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-28 01:22:45.897737 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f819f608d3bef1a168807c8ee7e7cde1d38aff7bab53f73ce3335785a61da87e', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-28 01:22:45.897794 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cc147958359287683ada85adadb88b118c9f0381d394d4ce8d52408c755c378c', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2026-03-28 01:22:45.897848 | orchestrator | skipping: [testbed-node-3] => (item={'id': '02d3484634e3702fddbfb0b499a4c66295016b6875e5793d0f47930d1da6d4f6', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2026-03-28 01:22:45.897898 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f7a8ab14fdf4b5ff0abe23b0ff35aacf88bb51d6559fc0e46c277aa5b8aea8fb', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2026-03-28 01:22:45.897919 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd9b6bea1af5023f3efb55c5828b2dfa44334f1222e664a54b738c62f3f2c7a15', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 18 minutes'})  2026-03-28 01:22:45.897961 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '1400fe15750879942e28a37366f8a0d25ad83781131daab7ae37f7a077caceda', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-28 01:22:45.897975 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c37bd49c07c1d3536de2ef6608ab7c89ad9eca0a11c72005f61e2b711ecd698f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-28 01:22:45.897986 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3ddfdc9b02c3c5a7544c37414eba54fd7259737cc6c32634a6b1cb7c53da9eae', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-28 01:22:45.897997 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ca3bde8c4ceab44f899ef462738f92e43b32ae0cfc2cc4e0a6a4b9dd053b3adf', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:22:45.898008 | orchestrator | ok: [testbed-node-3] => (item={'id': '25b5079c93a90f5556620ac21f712d093aed8b139adb56d42da97db8859a4164', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:22:45.898093 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b8e65be88a321b67abeeda06a90b2eed4f8445946ce5a6c9657a3a0f7f9f2a08', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-28 01:22:45.898118 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7d33f3d2ebf0cf523424482102d01d4852003e1d8be9b12b60932a6183a633c0', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
31 minutes (healthy)'})  2026-03-28 01:22:45.898135 | orchestrator | skipping: [testbed-node-3] => (item={'id': '531ea8feaa33c88fb392201c68fa5fb069b1b329b9c7f162973927d978fd2677', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-28 01:22:45.898147 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5da8b0b0311acf6fdcab744f1044fc37939b9e0b7e99c783eedd948bd9ba1cd1', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:22:45.898158 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1f159fcca1d0685f1f1732ef169b7a0d22765172fed0eeebed35126acdc50c9a', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:22:45.898169 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2ab6ab2ec921b37fd019a6cfe9b756b4fd5998ebd180e53d6406e951a7579344', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:22:45.898190 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'acaaa5078bd2bd9e9cb18161f0eafa28358584f93e5b7572fe66e219775aa35c', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-28 01:22:45.898202 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7f8d112b20299fac5ba0eec260ab851c9a7b1664181ad38fd8b858ff67a4678f', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-28 01:22:45.898213 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c0fbafb424c25a0b85d0462b6db18d4bad54ed1d926e5519e3ebc2435a42f81b', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 
'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-28 01:22:45.898249 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd06cdffdfe9968d39fcab615de35fad129bcd32517a6e9b1bd532c71d776fc5f', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2026-03-28 01:22:46.157616 | orchestrator | skipping: [testbed-node-4] => (item={'id': '35685dc2c469c93aa130126fe5889ec1177a4ad47612eec58f064581e0ea2d6e', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 17 minutes'})  2026-03-28 01:22:46.157701 | orchestrator | skipping: [testbed-node-4] => (item={'id': '26ef85b85672a2f9d3a412229392468932d872aeb8b468b4239c9de019798507', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 17 minutes'})  2026-03-28 01:22:46.157711 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cf5bde2163ea35797c0369299892cbaf349038ab8543033495ef825426fdb8d3', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 18 minutes'})  2026-03-28 01:22:46.157718 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'af0ff69a364a91911f4c75061c8e06c1f7b30128975f44b46c6273ac37007915', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-28 01:22:46.157725 | orchestrator | skipping: [testbed-node-4] => (item={'id': '25f1483eebc9df9eef2af7776d0c9613cfad476e823e7c4536b7dce6070171de', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-28 01:22:46.157732 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'97780906d10403a3cde8508f8ab8919bb2dd4a4637bd556c1a265488bf1eb5b4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-28 01:22:46.157740 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c18c174deb6f1f757f14bb252ed2f6f1522f0a6fed58f7868438938814f589f4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:22:46.157825 | orchestrator | ok: [testbed-node-4] => (item={'id': '9b288b5fc2aacea6354f66566e2f0c5e5c6e9c1b19702d6426fcedecccbc238e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:22:46.157842 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd4134a67049a6b13c97913dd1c732211dbbf958b7fa0c51f83c939fa0b1ee52a', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-28 01:22:46.157854 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4c7117b11195e0d64f8da6d052c1583833cc522a29fc1492d707279151938455', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-03-28 01:22:46.157883 | orchestrator | skipping: [testbed-node-4] => (item={'id': '248d14cc024e3616aa9eedd61b149e68e09af932d5253c530eb0a38b9ef73979', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-28 01:22:46.157891 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f2970fd033eebcff2f380c199e0f61d4ee3f816115070650f0d23bb8c044ecb1', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:22:46.157897 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '104d807a83eddeea7010ab82c5106243c49388a31462000efab4149e653e4d81', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:22:46.157904 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1156280a89091bf46e74fd260fb11928c18e63faaa3bb7ecdc439186efe62491', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:22:46.157911 | orchestrator | skipping: [testbed-node-5] => (item={'id': '50542490a4d969ef04dc8881bdb0558b36b0dd6a92ba5ee09db9dd55c8278dba', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-28 01:22:46.157933 | orchestrator | skipping: [testbed-node-5] => (item={'id': '28a05d7f74c10f94dec2cb45e509130f51efb9ff8860125ca029fd459825bccf', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-28 01:22:46.157940 | orchestrator | skipping: [testbed-node-5] => (item={'id': '66c8ae105b42c0a6b365b63d04bc99dfaa1942285fd6963e6343962315ebb101', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-03-28 01:22:46.157947 | orchestrator | skipping: [testbed-node-5] => (item={'id': '53988c3c48d28dacc02f963a63886961baaba68ae6fe52323592d0221b7259e4', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2026-03-28 01:22:46.157954 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f52792f595c2ecfeaf385c1bc486ddf6f628fc387e29d5161dabd3e9a99aee2d', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 
'running', 'status': 'Up 17 minutes'})  2026-03-28 01:22:46.157960 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e455d48a31b30ebe6a0478d073a101564028441389272cbee4a832df1092ecd4', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 18 minutes'})  2026-03-28 01:22:46.157966 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c91b7c628d465a3761da36a6997cb49ba0d8189befdf6c508f7d8569a5fe4287', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 18 minutes'})  2026-03-28 01:22:46.157973 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0e23f418e6e9fa3eebe12521d75774c4f731da743d4c0c5e2296d48762bf8902', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-28 01:22:46.157979 | orchestrator | skipping: [testbed-node-5] => (item={'id': '348a5fa2d94e9de96cea79869035929ef1072db4d5124a9cf5a9d67481d477b4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 25 minutes'})  2026-03-28 01:22:46.157990 | orchestrator | skipping: [testbed-node-5] => (item={'id': '00b044c9b37d4b2aa716a6a5aad44f1431ee3d14e5d1da211a95f759ca4005e6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 26 minutes'})  2026-03-28 01:22:46.158001 | orchestrator | ok: [testbed-node-5] => (item={'id': 'f52a999f10d467851c5cc0eea951ff143148bf99216f48c88b5a6d6b8954c1f9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:22:46.158008 | orchestrator | ok: [testbed-node-5] => (item={'id': '1e04b46592d1a1b235b6a3e2e9b941a38f7c5188de6346a01931e53ba5bbacae', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 27 minutes'}) 2026-03-28 01:22:46.158064 | orchestrator | skipping: [testbed-node-5] => (item={'id': '03daa516470cc7a66406e643448708f040b27899d589202512dd2c8b40218820', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 30 minutes'})  2026-03-28 01:22:46.158076 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1ffb2b578ad32775e129e21ff525f0a4a4646c9e458fdb0718a2e4c17dc7eb75', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 31 minutes (healthy)'})  2026-03-28 01:22:46.158086 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9b0f3bfd04e429f6c1f26382b89d28ff941601f7a4eda160559cc22d980c3c1e', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 32 minutes (healthy)'})  2026-03-28 01:22:46.158095 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1a55d2ad7ab8d06afcc169041e74edb9810a8f06b3917e9906a97b613b78b7ab', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:22:46.158104 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e1f30108030f70465e44e1ec751f92a321b9ea55f4392d1bf2b885e5e5e79f7f', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 33 minutes'})  2026-03-28 01:22:46.158165 | orchestrator | skipping: [testbed-node-5] => (item={'id': '163a828cc66d9f4396befc13865cd2fe54502c82c5148f612e7af636e2920a9b', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 34 minutes'})  2026-03-28 01:23:00.602269 | orchestrator | 2026-03-28 01:23:00.602354 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-03-28 01:23:00.602364 | orchestrator | Saturday 28 March 2026 01:22:46 +0000 (0:00:00.758) 0:00:05.511 ******** 2026-03-28 01:23:00.602369 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.602376 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:00.602381 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:00.602386 | orchestrator | 2026-03-28 01:23:00.602391 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-03-28 01:23:00.602397 | orchestrator | Saturday 28 March 2026 01:22:46 +0000 (0:00:00.324) 0:00:05.835 ******** 2026-03-28 01:23:00.602402 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.602407 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:23:00.602412 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:23:00.602417 | orchestrator | 2026-03-28 01:23:00.602422 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-03-28 01:23:00.602427 | orchestrator | Saturday 28 March 2026 01:22:47 +0000 (0:00:00.285) 0:00:06.121 ******** 2026-03-28 01:23:00.602432 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.602437 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:00.602442 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:00.602447 | orchestrator | 2026-03-28 01:23:00.602452 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:23:00.602457 | orchestrator | Saturday 28 March 2026 01:22:47 +0000 (0:00:00.372) 0:00:06.494 ******** 2026-03-28 01:23:00.602478 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.602483 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:00.602488 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:00.602493 | orchestrator | 2026-03-28 01:23:00.602498 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-03-28 
01:23:00.602503 | orchestrator | Saturday 28 March 2026 01:22:47 +0000 (0:00:00.483) 0:00:06.977 ******** 2026-03-28 01:23:00.602508 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-03-28 01:23:00.602513 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-03-28 01:23:00.602518 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.602523 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-03-28 01:23:00.602528 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-03-28 01:23:00.602533 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:23:00.602538 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-03-28 01:23:00.602543 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-03-28 01:23:00.602548 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:23:00.602553 | orchestrator | 2026-03-28 01:23:00.602558 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-03-28 01:23:00.602562 | orchestrator | Saturday 28 March 2026 01:22:48 +0000 (0:00:00.329) 0:00:07.307 ******** 2026-03-28 01:23:00.602567 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.602572 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:00.602577 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:00.602582 | orchestrator | 2026-03-28 01:23:00.602587 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-28 01:23:00.602592 | orchestrator | Saturday 28 March 2026 01:22:48 +0000 (0:00:00.326) 0:00:07.633 ******** 2026-03-28 01:23:00.602597 | orchestrator | skipping: [testbed-node-3] 
2026-03-28 01:23:00.602601 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:23:00.602606 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:23:00.602611 | orchestrator | 2026-03-28 01:23:00.602616 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-28 01:23:00.602621 | orchestrator | Saturday 28 March 2026 01:22:48 +0000 (0:00:00.322) 0:00:07.955 ******** 2026-03-28 01:23:00.602626 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.602631 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:23:00.602635 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:23:00.602640 | orchestrator | 2026-03-28 01:23:00.602645 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-03-28 01:23:00.602650 | orchestrator | Saturday 28 March 2026 01:22:49 +0000 (0:00:00.498) 0:00:08.454 ******** 2026-03-28 01:23:00.602688 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.602694 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:00.602699 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:00.602704 | orchestrator | 2026-03-28 01:23:00.602709 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:23:00.602714 | orchestrator | Saturday 28 March 2026 01:22:49 +0000 (0:00:00.331) 0:00:08.786 ******** 2026-03-28 01:23:00.602719 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.602724 | orchestrator | 2026-03-28 01:23:00.602728 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:23:00.602733 | orchestrator | Saturday 28 March 2026 01:22:49 +0000 (0:00:00.277) 0:00:09.063 ******** 2026-03-28 01:23:00.602738 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.602743 | orchestrator | 2026-03-28 01:23:00.602748 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-03-28 01:23:00.602753 | orchestrator | Saturday 28 March 2026 01:22:50 +0000 (0:00:00.263) 0:00:09.326 ******** 2026-03-28 01:23:00.602801 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.602806 | orchestrator | 2026-03-28 01:23:00.602811 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:00.602816 | orchestrator | Saturday 28 March 2026 01:22:50 +0000 (0:00:00.290) 0:00:09.617 ******** 2026-03-28 01:23:00.602821 | orchestrator | 2026-03-28 01:23:00.602826 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:00.602831 | orchestrator | Saturday 28 March 2026 01:22:50 +0000 (0:00:00.085) 0:00:09.703 ******** 2026-03-28 01:23:00.602836 | orchestrator | 2026-03-28 01:23:00.602841 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:00.602858 | orchestrator | Saturday 28 March 2026 01:22:50 +0000 (0:00:00.085) 0:00:09.788 ******** 2026-03-28 01:23:00.602863 | orchestrator | 2026-03-28 01:23:00.602868 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:23:00.602873 | orchestrator | Saturday 28 March 2026 01:22:50 +0000 (0:00:00.076) 0:00:09.865 ******** 2026-03-28 01:23:00.602878 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.602883 | orchestrator | 2026-03-28 01:23:00.602888 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-03-28 01:23:00.602893 | orchestrator | Saturday 28 March 2026 01:22:51 +0000 (0:00:00.697) 0:00:10.562 ******** 2026-03-28 01:23:00.602898 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.602902 | orchestrator | 2026-03-28 01:23:00.602907 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:23:00.602912 | 
orchestrator | Saturday 28 March 2026 01:22:51 +0000 (0:00:00.279) 0:00:10.841 ******** 2026-03-28 01:23:00.602917 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.602922 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:00.602927 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:00.602932 | orchestrator | 2026-03-28 01:23:00.602936 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-03-28 01:23:00.602941 | orchestrator | Saturday 28 March 2026 01:22:52 +0000 (0:00:00.311) 0:00:11.153 ******** 2026-03-28 01:23:00.602946 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.602951 | orchestrator | 2026-03-28 01:23:00.602956 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-28 01:23:00.602961 | orchestrator | Saturday 28 March 2026 01:22:52 +0000 (0:00:00.256) 0:00:11.410 ******** 2026-03-28 01:23:00.602966 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-28 01:23:00.602971 | orchestrator | 2026-03-28 01:23:00.602976 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-28 01:23:00.602980 | orchestrator | Saturday 28 March 2026 01:22:54 +0000 (0:00:02.348) 0:00:13.758 ******** 2026-03-28 01:23:00.602985 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.602990 | orchestrator | 2026-03-28 01:23:00.602995 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-28 01:23:00.603000 | orchestrator | Saturday 28 March 2026 01:22:54 +0000 (0:00:00.144) 0:00:13.903 ******** 2026-03-28 01:23:00.603005 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.603010 | orchestrator | 2026-03-28 01:23:00.603014 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-28 01:23:00.603019 | orchestrator | Saturday 28 March 2026 01:22:55 +0000 (0:00:00.347) 
0:00:14.250 ******** 2026-03-28 01:23:00.603024 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.603029 | orchestrator | 2026-03-28 01:23:00.603034 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-28 01:23:00.603042 | orchestrator | Saturday 28 March 2026 01:22:55 +0000 (0:00:00.134) 0:00:14.384 ******** 2026-03-28 01:23:00.603047 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.603052 | orchestrator | 2026-03-28 01:23:00.603056 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:23:00.603061 | orchestrator | Saturday 28 March 2026 01:22:55 +0000 (0:00:00.141) 0:00:14.526 ******** 2026-03-28 01:23:00.603066 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.603074 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:00.603079 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:00.603084 | orchestrator | 2026-03-28 01:23:00.603089 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-28 01:23:00.603094 | orchestrator | Saturday 28 March 2026 01:22:55 +0000 (0:00:00.527) 0:00:15.053 ******** 2026-03-28 01:23:00.603098 | orchestrator | changed: [testbed-node-4] 2026-03-28 01:23:00.603103 | orchestrator | changed: [testbed-node-3] 2026-03-28 01:23:00.603108 | orchestrator | changed: [testbed-node-5] 2026-03-28 01:23:00.603113 | orchestrator | 2026-03-28 01:23:00.603118 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-28 01:23:00.603123 | orchestrator | Saturday 28 March 2026 01:22:57 +0000 (0:00:01.748) 0:00:16.802 ******** 2026-03-28 01:23:00.603128 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.603133 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:00.603137 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:00.603142 | orchestrator | 2026-03-28 01:23:00.603147 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-03-28 01:23:00.603152 | orchestrator | Saturday 28 March 2026 01:22:58 +0000 (0:00:00.335) 0:00:17.138 ******** 2026-03-28 01:23:00.603157 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.603162 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:00.603166 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:00.603171 | orchestrator | 2026-03-28 01:23:00.603176 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-28 01:23:00.603181 | orchestrator | Saturday 28 March 2026 01:22:59 +0000 (0:00:00.995) 0:00:18.133 ******** 2026-03-28 01:23:00.603186 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.603191 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:23:00.603195 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:23:00.603200 | orchestrator | 2026-03-28 01:23:00.603205 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-28 01:23:00.603210 | orchestrator | Saturday 28 March 2026 01:22:59 +0000 (0:00:00.347) 0:00:18.481 ******** 2026-03-28 01:23:00.603215 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:00.603219 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:00.603224 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:00.603229 | orchestrator | 2026-03-28 01:23:00.603234 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-28 01:23:00.603239 | orchestrator | Saturday 28 March 2026 01:22:59 +0000 (0:00:00.345) 0:00:18.827 ******** 2026-03-28 01:23:00.603243 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.603248 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:23:00.603253 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:23:00.603258 | orchestrator | 2026-03-28 01:23:00.603263 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-03-28 01:23:00.603267 | orchestrator | Saturday 28 March 2026 01:23:00 +0000 (0:00:00.290) 0:00:19.117 ******** 2026-03-28 01:23:00.603272 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:00.603277 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:23:00.603282 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:23:00.603287 | orchestrator | 2026-03-28 01:23:00.603294 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-28 01:23:08.639741 | orchestrator | Saturday 28 March 2026 01:23:00 +0000 (0:00:00.565) 0:00:19.682 ******** 2026-03-28 01:23:08.639949 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:08.639975 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:08.639991 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:08.640008 | orchestrator | 2026-03-28 01:23:08.640026 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-28 01:23:08.640044 | orchestrator | Saturday 28 March 2026 01:23:01 +0000 (0:00:00.538) 0:00:20.220 ******** 2026-03-28 01:23:08.640060 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:08.640075 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:08.640091 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:08.640135 | orchestrator | 2026-03-28 01:23:08.640153 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-28 01:23:08.640169 | orchestrator | Saturday 28 March 2026 01:23:01 +0000 (0:00:00.557) 0:00:20.778 ******** 2026-03-28 01:23:08.640184 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:08.640199 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:08.640214 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:08.640230 | orchestrator | 2026-03-28 01:23:08.640247 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-28 
01:23:08.640262 | orchestrator | Saturday 28 March 2026 01:23:02 +0000 (0:00:00.339) 0:00:21.118 ******** 2026-03-28 01:23:08.640278 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:08.640295 | orchestrator | skipping: [testbed-node-4] 2026-03-28 01:23:08.640312 | orchestrator | skipping: [testbed-node-5] 2026-03-28 01:23:08.640328 | orchestrator | 2026-03-28 01:23:08.640346 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-03-28 01:23:08.640364 | orchestrator | Saturday 28 March 2026 01:23:02 +0000 (0:00:00.541) 0:00:21.659 ******** 2026-03-28 01:23:08.640382 | orchestrator | ok: [testbed-node-3] 2026-03-28 01:23:08.640400 | orchestrator | ok: [testbed-node-4] 2026-03-28 01:23:08.640417 | orchestrator | ok: [testbed-node-5] 2026-03-28 01:23:08.640435 | orchestrator | 2026-03-28 01:23:08.640452 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-28 01:23:08.640471 | orchestrator | Saturday 28 March 2026 01:23:02 +0000 (0:00:00.333) 0:00:21.992 ******** 2026-03-28 01:23:08.640488 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:08.640506 | orchestrator | 2026-03-28 01:23:08.640523 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-28 01:23:08.640541 | orchestrator | Saturday 28 March 2026 01:23:03 +0000 (0:00:00.299) 0:00:22.292 ******** 2026-03-28 01:23:08.640559 | orchestrator | skipping: [testbed-node-3] 2026-03-28 01:23:08.640576 | orchestrator | 2026-03-28 01:23:08.640593 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-28 01:23:08.640610 | orchestrator | Saturday 28 March 2026 01:23:03 +0000 (0:00:00.341) 0:00:22.633 ******** 2026-03-28 01:23:08.640643 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:08.640660 | orchestrator | 2026-03-28 01:23:08.640676 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-28 01:23:08.640692 | orchestrator | Saturday 28 March 2026 01:23:05 +0000 (0:00:01.857) 0:00:24.490 ******** 2026-03-28 01:23:08.640707 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:08.640723 | orchestrator | 2026-03-28 01:23:08.640740 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-28 01:23:08.640785 | orchestrator | Saturday 28 March 2026 01:23:05 +0000 (0:00:00.326) 0:00:24.817 ******** 2026-03-28 01:23:08.640802 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:08.640819 | orchestrator | 2026-03-28 01:23:08.640837 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:08.640853 | orchestrator | Saturday 28 March 2026 01:23:06 +0000 (0:00:00.279) 0:00:25.096 ******** 2026-03-28 01:23:08.640868 | orchestrator | 2026-03-28 01:23:08.640884 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:08.640900 | orchestrator | Saturday 28 March 2026 01:23:06 +0000 (0:00:00.273) 0:00:25.369 ******** 2026-03-28 01:23:08.640916 | orchestrator | 2026-03-28 01:23:08.640933 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-28 01:23:08.640949 | orchestrator | Saturday 28 March 2026 01:23:06 +0000 (0:00:00.069) 0:00:25.439 ******** 2026-03-28 01:23:08.640963 | orchestrator | 2026-03-28 01:23:08.640979 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-28 01:23:08.640994 | orchestrator | Saturday 28 March 2026 01:23:06 +0000 (0:00:00.074) 0:00:25.513 ******** 2026-03-28 01:23:08.641010 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-28 01:23:08.641045 | orchestrator | 
2026-03-28 01:23:08.641062 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-28 01:23:08.641078 | orchestrator | Saturday 28 March 2026 01:23:07 +0000 (0:00:01.451) 0:00:26.965 ******** 2026-03-28 01:23:08.641093 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-28 01:23:08.641108 | orchestrator |  "msg": [ 2026-03-28 01:23:08.641122 | orchestrator |  "Validator run completed.", 2026-03-28 01:23:08.641139 | orchestrator |  "You can find the report file here:", 2026-03-28 01:23:08.641155 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-28T01:22:42+00:00-report.json", 2026-03-28 01:23:08.641172 | orchestrator |  "on the following host:", 2026-03-28 01:23:08.641187 | orchestrator |  "testbed-manager" 2026-03-28 01:23:08.641202 | orchestrator |  ] 2026-03-28 01:23:08.641218 | orchestrator | } 2026-03-28 01:23:08.641234 | orchestrator | 2026-03-28 01:23:08.641250 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:23:08.641267 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-28 01:23:08.641285 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-28 01:23:08.641330 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-28 01:23:08.641348 | orchestrator | 2026-03-28 01:23:08.641364 | orchestrator | 2026-03-28 01:23:08.641379 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:23:08.641395 | orchestrator | Saturday 28 March 2026 01:23:08 +0000 (0:00:00.436) 0:00:27.402 ******** 2026-03-28 01:23:08.641411 | orchestrator | =============================================================================== 2026-03-28 01:23:08.641427 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.35s 2026-03-28 01:23:08.641442 | orchestrator | Aggregate test results step one ----------------------------------------- 1.86s 2026-03-28 01:23:08.641457 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.75s 2026-03-28 01:23:08.641474 | orchestrator | Write report file ------------------------------------------------------- 1.45s 2026-03-28 01:23:08.641491 | orchestrator | Get timestamp for report file ------------------------------------------- 1.13s 2026-03-28 01:23:08.641508 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 1.00s 2026-03-28 01:23:08.641522 | orchestrator | Create report output directory ------------------------------------------ 0.78s 2026-03-28 01:23:08.641535 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.76s 2026-03-28 01:23:08.641550 | orchestrator | Print report file information ------------------------------------------- 0.70s 2026-03-28 01:23:08.641564 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.57s 2026-03-28 01:23:08.641579 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.56s 2026-03-28 01:23:08.641594 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.54s 2026-03-28 01:23:08.641610 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2026-03-28 01:23:08.641626 | orchestrator | Prepare test data ------------------------------------------------------- 0.53s 2026-03-28 01:23:08.641641 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.50s 2026-03-28 01:23:08.641656 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2026-03-28 01:23:08.641671 | orchestrator | Calculate OSD devices for each 
host ------------------------------------- 0.48s 2026-03-28 01:23:08.641686 | orchestrator | Print report file information ------------------------------------------- 0.44s 2026-03-28 01:23:08.641702 | orchestrator | Flush handlers ---------------------------------------------------------- 0.42s 2026-03-28 01:23:08.641737 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.39s 2026-03-28 01:23:08.849659 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-28 01:23:08.861930 | orchestrator | + set -e 2026-03-28 01:23:08.862227 | orchestrator | + source /opt/manager-vars.sh 2026-03-28 01:23:08.862256 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-28 01:23:08.862268 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-28 01:23:08.862279 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-28 01:23:08.862300 | orchestrator | ++ CEPH_VERSION=reef 2026-03-28 01:23:08.862321 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-28 01:23:08.862341 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-28 01:23:08.862359 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-28 01:23:08.862371 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-28 01:23:08.862382 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-03-28 01:23:08.862392 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-03-28 01:23:08.862403 | orchestrator | ++ export ARA=false 2026-03-28 01:23:08.862414 | orchestrator | ++ ARA=false 2026-03-28 01:23:08.862425 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-28 01:23:08.862435 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-28 01:23:08.862446 | orchestrator | ++ export TEMPEST=true 2026-03-28 01:23:08.862456 | orchestrator | ++ TEMPEST=true 2026-03-28 01:23:08.862467 | orchestrator | ++ export IS_ZUUL=true 2026-03-28 01:23:08.862478 | orchestrator | ++ IS_ZUUL=true 2026-03-28 01:23:08.862489 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 
2026-03-28 01:23:08.862499 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2026-03-28 01:23:08.862510 | orchestrator | ++ export EXTERNAL_API=false 2026-03-28 01:23:08.862520 | orchestrator | ++ EXTERNAL_API=false 2026-03-28 01:23:08.862531 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-28 01:23:08.862541 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-28 01:23:08.862552 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-28 01:23:08.862562 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-28 01:23:08.862574 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-28 01:23:08.862585 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-28 01:23:08.862595 | orchestrator | + source /etc/os-release 2026-03-28 01:23:08.862606 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-28 01:23:08.862616 | orchestrator | ++ NAME=Ubuntu 2026-03-28 01:23:08.862627 | orchestrator | ++ VERSION_ID=24.04 2026-03-28 01:23:08.862638 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-28 01:23:08.862664 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-28 01:23:08.862686 | orchestrator | ++ ID=ubuntu 2026-03-28 01:23:08.862697 | orchestrator | ++ ID_LIKE=debian 2026-03-28 01:23:08.862707 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-28 01:23:08.862718 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-28 01:23:08.862729 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-28 01:23:08.862740 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-28 01:23:08.862797 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-28 01:23:08.862809 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-28 01:23:08.862832 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-28 01:23:08.862845 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-28 01:23:08.862858 | orchestrator | + dpkg 
-s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-28 01:23:08.898646 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-28 01:23:35.730851 | orchestrator | 2026-03-28 01:23:35.731049 | orchestrator | # Status of Elasticsearch 2026-03-28 01:23:35.731078 | orchestrator | 2026-03-28 01:23:35.731097 | orchestrator | + pushd /opt/configuration/contrib 2026-03-28 01:23:35.731115 | orchestrator | + echo 2026-03-28 01:23:35.731134 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-28 01:23:35.731152 | orchestrator | + echo 2026-03-28 01:23:35.731171 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-28 01:23:35.932553 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-28 01:23:35.932648 | orchestrator | 2026-03-28 01:23:35.932661 | orchestrator | # Status of MariaDB 2026-03-28 01:23:35.932697 | orchestrator | 2026-03-28 01:23:35.932706 | orchestrator | + echo 2026-03-28 01:23:35.932715 | orchestrator | + echo '# Status of MariaDB' 2026-03-28 01:23:35.932724 | orchestrator | + echo 2026-03-28 01:23:35.933500 | orchestrator | ++ semver latest 10.0.0-0 2026-03-28 01:23:36.005455 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 01:23:36.005551 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 01:23:36.005563 | orchestrator | + osism status database 2026-03-28 01:23:37.692253 | orchestrator | 2026-03-28 01:23:37 | ERROR  | Unable to get ansible vault password 2026-03-28 01:23:37.692433 | orchestrator | 2026-03-28 01:23:37 | ERROR  | Unable to get vault secret: 
[Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:23:37.692467 | orchestrator | 2026-03-28 01:23:37 | ERROR  | Dropping encrypted entries 2026-03-28 01:23:37.727688 | orchestrator | 2026-03-28 01:23:37 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-03-28 01:23:37.741774 | orchestrator | 2026-03-28 01:23:37 | INFO  | Cluster Status: Primary 2026-03-28 01:23:37.741975 | orchestrator | 2026-03-28 01:23:37 | INFO  | Connected: ON 2026-03-28 01:23:37.741990 | orchestrator | 2026-03-28 01:23:37 | INFO  | Ready: ON 2026-03-28 01:23:37.742002 | orchestrator | 2026-03-28 01:23:37 | INFO  | Cluster Size: 3 2026-03-28 01:23:37.742048 | orchestrator | 2026-03-28 01:23:37 | INFO  | Local State: Synced 2026-03-28 01:23:37.742076 | orchestrator | 2026-03-28 01:23:37 | INFO  | Cluster State UUID: 21dd4705-2a41-11f1-b6d8-6f4297da3421 2026-03-28 01:23:37.742089 | orchestrator | 2026-03-28 01:23:37 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-03-28 01:23:37.742102 | orchestrator | 2026-03-28 01:23:37 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-03-28 01:23:37.742113 | orchestrator | 2026-03-28 01:23:37 | INFO  | Local Node UUID: 5ae13c14-2a41-11f1-9920-ef0764a1603e 2026-03-28 01:23:37.742125 | orchestrator | 2026-03-28 01:23:37 | INFO  | Flow Control Paused: 0.00% 2026-03-28 01:23:37.742157 | orchestrator | 2026-03-28 01:23:37 | INFO  | Recv Queue Avg: 0 2026-03-28 01:23:37.742169 | orchestrator | 2026-03-28 01:23:37 | INFO  | Send Queue Avg: 0.000144092 2026-03-28 01:23:37.742184 | orchestrator | 2026-03-28 01:23:37 | INFO  | Transactions: 4680 local commits, 6881 replicated, 77 received 2026-03-28 01:23:37.742196 | orchestrator | 2026-03-28 01:23:37 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-03-28 01:23:37.742209 | orchestrator | 2026-03-28 01:23:37 | INFO  | MariaDB Uptime: 24 minutes, 16 seconds 2026-03-28 01:23:37.742228 | orchestrator | 2026-03-28 01:23:37 
| INFO  | Threads: 153 connected, 1 running 2026-03-28 01:23:37.742248 | orchestrator | 2026-03-28 01:23:37 | INFO  | Queries: 189611 total, 0 slow 2026-03-28 01:23:37.742271 | orchestrator | 2026-03-28 01:23:37 | INFO  | Aborted Connects: 176 2026-03-28 01:23:37.742295 | orchestrator | 2026-03-28 01:23:37 | INFO  | MariaDB Galera Cluster validation PASSED 2026-03-28 01:23:37.998829 | orchestrator | 2026-03-28 01:23:37.998930 | orchestrator | # Status of Prometheus 2026-03-28 01:23:37.998947 | orchestrator | 2026-03-28 01:23:37.998959 | orchestrator | + echo 2026-03-28 01:23:37.998971 | orchestrator | + echo '# Status of Prometheus' 2026-03-28 01:23:37.998982 | orchestrator | + echo 2026-03-28 01:23:37.998994 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-28 01:23:38.060234 | orchestrator | Unauthorized 2026-03-28 01:23:38.064080 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-28 01:23:38.125077 | orchestrator | Unauthorized 2026-03-28 01:23:38.128228 | orchestrator | 2026-03-28 01:23:38.128292 | orchestrator | # Status of RabbitMQ 2026-03-28 01:23:38.128305 | orchestrator | 2026-03-28 01:23:38.128314 | orchestrator | + echo 2026-03-28 01:23:38.128323 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-28 01:23:38.128332 | orchestrator | + echo 2026-03-28 01:23:38.129668 | orchestrator | ++ semver latest 10.0.0-0 2026-03-28 01:23:38.194995 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-28 01:23:38.195130 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-28 01:23:38.195155 | orchestrator | + osism status messaging 2026-03-28 01:23:46.499495 | orchestrator | 2026-03-28 01:23:46 | ERROR  | Unable to get ansible vault password 2026-03-28 01:23:46.499662 | orchestrator | 2026-03-28 01:23:46 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:23:46.499694 | orchestrator | 2026-03-28 01:23:46 | ERROR  | Dropping encrypted 
entries 2026-03-28 01:23:46.534512 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-03-28 01:23:46.618541 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] RabbitMQ Version: 4.1.8 2026-03-28 01:23:46.618914 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Erlang Version: 27.3.4.1 2026-03-28 01:23:46.618943 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-03-28 01:23:46.618955 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Cluster Size: 3 2026-03-28 01:23:46.618969 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-28 01:23:46.618982 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-28 01:23:46.619009 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-03-28 01:23:46.619021 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Connections: 210, Channels: 209, Queues: 173 2026-03-28 01:23:46.619033 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Messages: 231 total, 231 ready, 0 unacked 2026-03-28 01:23:46.619044 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Message Rates: 12.4/s publish, 13.2/s deliver 2026-03-28 01:23:46.619054 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Disk Free: 58.2 GB (limit: 0.0 GB) 2026-03-28 01:23:46.619066 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Memory Used: 0.15 GB (limit: 18.80 GB) 2026-03-28 01:23:46.619076 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] File Descriptors: 111/1024 2026-03-28 01:23:46.619087 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-0] Sockets: 0/0 2026-03-28 
01:23:46.619099 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-03-28 01:23:46.694912 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] RabbitMQ Version: 4.1.8 2026-03-28 01:23:46.695041 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Erlang Version: 27.3.4.1 2026-03-28 01:23:46.695217 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-03-28 01:23:46.695236 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Cluster Size: 3 2026-03-28 01:23:46.695250 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-28 01:23:46.695264 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-28 01:23:46.695276 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-03-28 01:23:46.695309 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Connections: 210, Channels: 209, Queues: 173 2026-03-28 01:23:46.695351 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Messages: 231 total, 231 ready, 0 unacked 2026-03-28 01:23:46.695362 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Message Rates: 12.4/s publish, 13.2/s deliver 2026-03-28 01:23:46.695373 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Disk Free: 58.7 GB (limit: 0.0 GB) 2026-03-28 01:23:46.695384 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Memory Used: 0.15 GB (limit: 18.80 GB) 2026-03-28 01:23:46.695395 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] File Descriptors: 113/1024 2026-03-28 01:23:46.695406 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-1] Sockets: 0/0 2026-03-28 01:23:46.695417 | 
orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-03-28 01:23:46.771533 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] RabbitMQ Version: 4.1.8 2026-03-28 01:23:46.771642 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Erlang Version: 27.3.4.1 2026-03-28 01:23:46.771656 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-03-28 01:23:46.771668 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Cluster Size: 3 2026-03-28 01:23:46.771682 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-28 01:23:46.771707 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-03-28 01:23:46.771719 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-03-28 01:23:46.771835 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Connections: 210, Channels: 209, Queues: 173 2026-03-28 01:23:46.771851 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Messages: 231 total, 231 ready, 0 unacked 2026-03-28 01:23:46.771863 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Message Rates: 12.4/s publish, 13.2/s deliver 2026-03-28 01:23:46.771874 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Disk Free: 58.5 GB (limit: 0.0 GB) 2026-03-28 01:23:46.772121 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Memory Used: 0.15 GB (limit: 18.80 GB) 2026-03-28 01:23:46.772142 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] File Descriptors: 107/1024 2026-03-28 01:23:46.772150 | orchestrator | 2026-03-28 01:23:46 | INFO  | [testbed-node-2] Sockets: 0/0 2026-03-28 01:23:46.772161 | orchestrator | 
2026-03-28 01:23:46 | INFO  | RabbitMQ Cluster validation PASSED 2026-03-28 01:23:47.082378 | orchestrator | 2026-03-28 01:23:47.082481 | orchestrator | # Status of Redis 2026-03-28 01:23:47.082497 | orchestrator | 2026-03-28 01:23:47.082509 | orchestrator | + echo 2026-03-28 01:23:47.082521 | orchestrator | + echo '# Status of Redis' 2026-03-28 01:23:47.082532 | orchestrator | + echo 2026-03-28 01:23:47.082545 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-28 01:23:47.088021 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001490s;;;0.000000;10.000000 2026-03-28 01:23:47.088096 | orchestrator | 2026-03-28 01:23:47.088109 | orchestrator | # Create backup of MariaDB database 2026-03-28 01:23:47.088122 | orchestrator | 2026-03-28 01:23:47.088134 | orchestrator | + popd 2026-03-28 01:23:47.088145 | orchestrator | + echo 2026-03-28 01:23:47.088156 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-28 01:23:47.088167 | orchestrator | + echo 2026-03-28 01:23:47.088178 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-28 01:23:48.475861 | orchestrator | 2026-03-28 01:23:48 | INFO  | Prepare task for execution of mariadb_backup. 2026-03-28 01:23:48.545052 | orchestrator | 2026-03-28 01:23:48 | INFO  | Task 70a7624a-a20b-4eb9-87a4-2ac99ea49b24 (mariadb_backup) was prepared for execution. 2026-03-28 01:23:48.545152 | orchestrator | 2026-03-28 01:23:48 | INFO  | It takes a moment until task 70a7624a-a20b-4eb9-87a4-2ac99ea49b24 (mariadb_backup) has been started and output is visible here. 
2026-03-28 01:24:50.746545 | orchestrator | 2026-03-28 01:24:50.746634 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-28 01:24:50.746644 | orchestrator | 2026-03-28 01:24:50.746651 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-28 01:24:50.746658 | orchestrator | Saturday 28 March 2026 01:23:52 +0000 (0:00:00.265) 0:00:00.265 ******** 2026-03-28 01:24:50.746665 | orchestrator | ok: [testbed-node-0] 2026-03-28 01:24:50.746672 | orchestrator | ok: [testbed-node-1] 2026-03-28 01:24:50.746678 | orchestrator | ok: [testbed-node-2] 2026-03-28 01:24:50.746685 | orchestrator | 2026-03-28 01:24:50.746728 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-28 01:24:50.746738 | orchestrator | Saturday 28 March 2026 01:23:52 +0000 (0:00:00.339) 0:00:00.604 ******** 2026-03-28 01:24:50.746751 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-28 01:24:50.746758 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-28 01:24:50.746765 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-28 01:24:50.746771 | orchestrator | 2026-03-28 01:24:50.746777 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-28 01:24:50.746784 | orchestrator | 2026-03-28 01:24:50.746790 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-28 01:24:50.746797 | orchestrator | Saturday 28 March 2026 01:23:52 +0000 (0:00:00.435) 0:00:01.040 ******** 2026-03-28 01:24:50.746804 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-28 01:24:50.746810 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-28 01:24:50.746817 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-28 01:24:50.746823 | orchestrator | 
2026-03-28 01:24:50.746829 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-28 01:24:50.746835 | orchestrator | Saturday 28 March 2026 01:23:53 +0000 (0:00:00.449) 0:00:01.489 ********
2026-03-28 01:24:50.746842 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-28 01:24:50.746850 | orchestrator |
2026-03-28 01:24:50.746856 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-03-28 01:24:50.746862 | orchestrator | Saturday 28 March 2026 01:23:53 +0000 (0:00:00.701) 0:00:02.190 ********
2026-03-28 01:24:50.746869 | orchestrator | ok: [testbed-node-1]
2026-03-28 01:24:50.746875 | orchestrator | ok: [testbed-node-2]
2026-03-28 01:24:50.746881 | orchestrator | ok: [testbed-node-0]
2026-03-28 01:24:50.746887 | orchestrator |
2026-03-28 01:24:50.746893 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-03-28 01:24:50.746900 | orchestrator | Saturday 28 March 2026 01:23:57 +0000 (0:00:03.811) 0:00:06.002 ********
2026-03-28 01:24:50.746906 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:24:50.746914 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:24:50.746920 | orchestrator | changed: [testbed-node-0]
2026-03-28 01:24:50.746926 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-28 01:24:50.746933 | orchestrator |
2026-03-28 01:24:50.746939 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-28 01:24:50.746945 | orchestrator | skipping: no hosts matched
2026-03-28 01:24:50.746951 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-28 01:24:50.746958 | orchestrator |
2026-03-28 01:24:50.746964 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-28 01:24:50.746984 | orchestrator | skipping: no hosts matched
2026-03-28 01:24:50.746991 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-28 01:24:50.746997 | orchestrator | mariadb_bootstrap_restart
2026-03-28 01:24:50.747003 | orchestrator |
2026-03-28 01:24:50.747009 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-28 01:24:50.747016 | orchestrator | skipping: no hosts matched
2026-03-28 01:24:50.747022 | orchestrator |
2026-03-28 01:24:50.747028 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-28 01:24:50.747034 | orchestrator |
2026-03-28 01:24:50.747040 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-28 01:24:50.747046 | orchestrator | Saturday 28 March 2026 01:24:49 +0000 (0:00:52.020) 0:00:58.023 ********
2026-03-28 01:24:50.747052 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:24:50.747058 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:24:50.747065 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:24:50.747071 | orchestrator |
2026-03-28 01:24:50.747077 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-28 01:24:50.747083 | orchestrator | Saturday 28 March 2026 01:24:50 +0000 (0:00:00.315) 0:00:58.339 ********
2026-03-28 01:24:50.747089 | orchestrator | skipping: [testbed-node-0]
2026-03-28 01:24:50.747095 | orchestrator | skipping: [testbed-node-1]
2026-03-28 01:24:50.747101 | orchestrator | skipping: [testbed-node-2]
2026-03-28 01:24:50.747107 | orchestrator |
2026-03-28 01:24:50.747113 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:24:50.747122 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-28 01:24:50.747129 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-28 01:24:50.747135 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-28 01:24:50.747142 | orchestrator |
2026-03-28 01:24:50.747148 | orchestrator |
2026-03-28 01:24:50.747154 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:24:50.747160 | orchestrator | Saturday 28 March 2026 01:24:50 +0000 (0:00:00.255) 0:00:58.594 ********
2026-03-28 01:24:50.747166 | orchestrator | ===============================================================================
2026-03-28 01:24:50.747172 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 52.02s
2026-03-28 01:24:50.747190 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.81s
2026-03-28 01:24:50.747196 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.70s
2026-03-28 01:24:50.747202 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.45s
2026-03-28 01:24:50.747208 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-03-28 01:24:50.747215 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-03-28 01:24:50.747221 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s
2026-03-28 01:24:50.747227 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.26s
2026-03-28 01:24:50.981929 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-03-28 01:24:50.991148 | orchestrator | + set -e
2026-03-28 01:24:50.991394 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-28 01:24:50.991425 | orchestrator | ++ export INTERACTIVE=false
2026-03-28 01:24:50.991446 | orchestrator | ++ INTERACTIVE=false
2026-03-28 01:24:50.991465 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-28 01:24:50.991484 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-28 01:24:50.991521 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-28 01:24:50.993388 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-28 01:24:51.000041 | orchestrator |
2026-03-28 01:24:51.000117 | orchestrator | # OpenStack endpoints
2026-03-28 01:24:51.000137 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-28 01:24:51.000150 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-28 01:24:51.000160 | orchestrator | + export OS_CLOUD=admin
2026-03-28 01:24:51.000170 | orchestrator | + OS_CLOUD=admin
2026-03-28 01:24:51.000180 | orchestrator | + echo
2026-03-28 01:24:51.000190 | orchestrator | + echo '# OpenStack endpoints'
2026-03-28 01:24:51.000199 | orchestrator |
2026-03-28 01:24:51.000209 | orchestrator | + echo
2026-03-28 01:24:51.000220 | orchestrator | + openstack endpoint list
2026-03-28 01:24:54.204516 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-28 01:24:54.204656 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-03-28 01:24:54.204677 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-28 01:24:54.204719 | orchestrator | | 0281cc31406c4a6084640bb22079bd8c | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-03-28 01:24:54.204733 | orchestrator | | 06fc34541f8b4e6c84477fc6b7d32155 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-03-28 01:24:54.204747 | orchestrator | | 0def1e1d8dbe4e99a745204b69c81702 | RegionOne | cinder | block-storage | True | public | https://api.testbed.osism.xyz:8776/v3 |
2026-03-28 01:24:54.204762 | orchestrator | | 30760b2514b647c0915fce68357cb353 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-28 01:24:54.204778 | orchestrator | | 65c8602a68184b4f9c627ba4ec62d655 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-03-28 01:24:54.204840 | orchestrator | | 691737337dd9423ab032dfb229a14d8b | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-03-28 01:24:54.204853 | orchestrator | | 6b1033a78f2641ae820c93cd913f2c70 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-28 01:24:54.204864 | orchestrator | | 6f67015e6b644aa781bc632eb7b06291 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-03-28 01:24:54.204874 | orchestrator | | 7b81550877c841e5821a1307ae4f2381 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-03-28 01:24:54.204884 | orchestrator | | 8255092c66c84364912ae34cea2c94f2 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-03-28 01:24:54.204893 | orchestrator | | 86a7085f91294a90b4b21438d29f64af | RegionOne | cinder | block-storage | True | internal | https://api-int.testbed.osism.xyz:8776/v3 |
2026-03-28 01:24:54.204902 | orchestrator | | 92cbdac7c8a749299247f8f5c76526cd | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-03-28 01:24:54.204911 | orchestrator | | 93e2d16ed22b4a24b0574987660a5f7d | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-03-28 01:24:54.204920 | orchestrator | | 9f18e2908ccb49d2a40c70ff5519d17a | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-03-28 01:24:54.204929 | orchestrator | | a89b7b2b2767424d98eb2657ea35c700 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-03-28 01:24:54.204962 | orchestrator | | a953b125981347db8a4e5a49ea97e745 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-03-28 01:24:54.204972 | orchestrator | | b1b16354b6e540eb86ae432947451bbb | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-03-28 01:24:54.204981 | orchestrator | | b29edacffeca4cd2a69a84dda4e02919 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-03-28 01:24:54.204990 | orchestrator | | bd866d01376c41db8a21b49dfc4fad0f | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-03-28 01:24:54.204999 | orchestrator | | bf1dd40a72f643678961371f47ef40b3 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-03-28 01:24:54.205040 | orchestrator | | c4f37d3e5b36472298681d7458926cc7 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-03-28 01:24:54.205052 | orchestrator | | c7b01ea73c484e06a050ff4ace84aa96 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-03-28 01:24:54.205063 | orchestrator | | e25e5da180a24acaa56653444fa319f1 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-03-28 01:24:54.205073 | orchestrator | | e420bb0674e2442f866abecb31a47e77 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-03-28 01:24:54.205084 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-28 01:24:54.479164 | orchestrator |
2026-03-28 01:24:54.479258 | orchestrator | # Cinder
2026-03-28 01:24:54.479272 | orchestrator |
2026-03-28 01:24:54.479283 | orchestrator | + echo
2026-03-28 01:24:54.479293 | orchestrator | + echo '# Cinder'
2026-03-28 01:24:54.479303 | orchestrator | + echo
2026-03-28 01:24:54.479313 | orchestrator | + openstack volume service list
2026-03-28 01:24:58.429761 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-28 01:24:58.429857 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-03-28 01:24:58.429869 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-28 01:24:58.429879 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-28T01:24:57.000000 |
2026-03-28 01:24:58.429888 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-28T01:24:57.000000 |
2026-03-28 01:24:58.429897 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-28T01:24:57.000000 |
2026-03-28 01:24:58.429906 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-28T01:24:57.000000 |
2026-03-28 01:24:58.429914 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-28T01:24:51.000000 |
2026-03-28 01:24:58.429923 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-28T01:24:54.000000 |
2026-03-28 01:24:58.429931 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-28T01:24:54.000000 |
2026-03-28 01:24:58.429940 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-28T01:24:56.000000 |
2026-03-28 01:24:58.429949 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-28T01:24:56.000000 |
2026-03-28 01:24:58.429958 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-28 01:24:58.728912 | orchestrator |
2026-03-28 01:24:58.729006 | orchestrator | # Neutron
2026-03-28 01:24:58.729018 | orchestrator |
2026-03-28 01:24:58.729026 | orchestrator | + echo
2026-03-28 01:24:58.729034 | orchestrator | + echo '# Neutron'
2026-03-28 01:24:58.729042 | orchestrator | + echo
2026-03-28 01:24:58.729050 | orchestrator | + openstack network agent list
2026-03-28 01:25:01.687195 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-28 01:25:01.687307 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-03-28 01:25:01.687322 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-28 01:25:01.687334 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-03-28 01:25:01.687345 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-03-28 01:25:01.687384 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-03-28 01:25:01.687396 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-03-28 01:25:01.687426 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-03-28 01:25:01.687437 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-03-28 01:25:01.687448 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-28 01:25:01.687459 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-28 01:25:01.687470 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-28 01:25:01.687481 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-28 01:25:02.012328 | orchestrator | + openstack network service provider list
2026-03-28 01:25:04.655948 | orchestrator | +---------------+------+---------+
2026-03-28 01:25:04.656050 | orchestrator | | Service Type | Name | Default |
2026-03-28 01:25:04.656064 | orchestrator | +---------------+------+---------+
2026-03-28 01:25:04.656076 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-03-28 01:25:04.656086 | orchestrator | +---------------+------+---------+
2026-03-28 01:25:04.961446 | orchestrator |
2026-03-28 01:25:04.961538 | orchestrator | # Nova
2026-03-28 01:25:04.961550 | orchestrator |
2026-03-28 01:25:04.961558 | orchestrator | + echo
2026-03-28 01:25:04.961564 | orchestrator | + echo '# Nova'
2026-03-28 01:25:04.961572 | orchestrator | + echo
2026-03-28 01:25:04.961579 | orchestrator | + openstack compute service list
2026-03-28 01:25:08.432868 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-28 01:25:08.432990 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-03-28 01:25:08.433007 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-28 01:25:08.433020 | orchestrator | | f8b44cb4-6f4e-46bf-8ebd-45167cba59be | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-28T01:25:02.000000 |
2026-03-28 01:25:08.433031 | orchestrator | | edadc898-5fd8-47c3-8ba8-d1de9c6e2971 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-28T01:25:03.000000 |
2026-03-28 01:25:08.433069 | orchestrator | | 138d5d56-f054-4cc4-8163-72a4fbd14baa | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-28T01:25:01.000000 |
2026-03-28 01:25:08.433081 | orchestrator | | 1502a69d-1e5e-43d9-97b8-742ca5f5bc89 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-28T01:25:07.000000 |
2026-03-28 01:25:08.433093 | orchestrator | | f711be41-faf8-4072-aa17-de02f223c25c | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-28T01:24:58.000000 |
2026-03-28 01:25:08.433103 | orchestrator | | c3c5d136-4d9b-435f-83ee-f2ef65c9e9d4 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-28T01:24:58.000000 |
2026-03-28 01:25:08.433115 | orchestrator | | d5a321b5-9814-49f5-a60d-cc18dd8346a3 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-28T01:25:06.000000 |
2026-03-28 01:25:08.433125 | orchestrator | | 52a71178-0ace-4de1-85c7-00326787e6a0 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-28T01:25:07.000000 |
2026-03-28 01:25:08.433136 | orchestrator | | 5904e0e8-2d8d-46b0-b86b-27f729709ab3 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-28T01:25:07.000000 |
2026-03-28 01:25:08.433147 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-28 01:25:08.721146 | orchestrator | + openstack hypervisor list
2026-03-28 01:25:11.512652 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-28 01:25:11.512810 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-03-28 01:25:11.512828 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-28 01:25:11.512840 | orchestrator | | 83622f49-370d-49ef-a754-8b4d9b198041 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-03-28 01:25:11.512852 | orchestrator | | 44a1ef28-6df2-4f86-9b1e-7684efd211eb | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-03-28 01:25:11.512863 | orchestrator | | 4196b7fc-3792-4a5b-9f42-f86f0afb6cce | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-03-28 01:25:11.512888 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-28 01:25:11.816419 | orchestrator |
2026-03-28 01:25:11.816578 | orchestrator | # Run OpenStack test play
2026-03-28 01:25:11.816597 | orchestrator |
2026-03-28 01:25:11.816610 | orchestrator | + echo
2026-03-28 01:25:11.816621 | orchestrator | + echo '# Run OpenStack test play'
2026-03-28 01:25:11.816634 | orchestrator | + echo
2026-03-28 01:25:11.816646 | orchestrator | + osism apply --environment openstack test
2026-03-28 01:25:13.226244 | orchestrator | 2026-03-28 01:25:13 | INFO  | Trying to run play test in environment openstack
2026-03-28 01:25:23.293334 | orchestrator | 2026-03-28 01:25:23 | INFO  | Prepare task for execution of test.
2026-03-28 01:25:23.378531 | orchestrator | 2026-03-28 01:25:23 | INFO  | Task bd811a32-c792-404c-8fc9-bc513023d8d3 (test) was prepared for execution.
2026-03-28 01:25:23.378740 | orchestrator | 2026-03-28 01:25:23 | INFO  | It takes a moment until task bd811a32-c792-404c-8fc9-bc513023d8d3 (test) has been started and output is visible here.
2026-03-28 01:28:21.997849 | orchestrator |
2026-03-28 01:28:21.997959 | orchestrator | PLAY [Create test project] *****************************************************
2026-03-28 01:28:21.997975 | orchestrator |
2026-03-28 01:28:21.997987 | orchestrator | TASK [Create test domain] ******************************************************
2026-03-28 01:28:21.997999 | orchestrator | Saturday 28 March 2026 01:25:26 +0000 (0:00:00.134) 0:00:00.134 ********
2026-03-28 01:28:21.998011 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998074 | orchestrator |
2026-03-28 01:28:21.998087 | orchestrator | TASK [Create test-admin user] **************************************************
2026-03-28 01:28:21.998100 | orchestrator | Saturday 28 March 2026 01:25:30 +0000 (0:00:04.101) 0:00:04.236 ********
2026-03-28 01:28:21.998136 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998149 | orchestrator |
2026-03-28 01:28:21.998162 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-03-28 01:28:21.998174 | orchestrator | Saturday 28 March 2026 01:25:35 +0000 (0:00:04.719) 0:00:08.956 ********
2026-03-28 01:28:21.998187 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998199 | orchestrator |
2026-03-28 01:28:21.998212 | orchestrator | TASK [Create test project] *****************************************************
2026-03-28 01:28:21.998224 | orchestrator | Saturday 28 March 2026 01:25:42 +0000 (0:00:07.188) 0:00:16.144 ********
2026-03-28 01:28:21.998272 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998284 | orchestrator |
2026-03-28 01:28:21.998295 | orchestrator | TASK [Create test user] ********************************************************
2026-03-28 01:28:21.998307 | orchestrator | Saturday 28 March 2026 01:25:47 +0000 (0:00:04.692) 0:00:20.836 ********
2026-03-28 01:28:21.998318 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998330 | orchestrator |
2026-03-28 01:28:21.998342 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-03-28 01:28:21.998353 | orchestrator | Saturday 28 March 2026 01:25:52 +0000 (0:00:04.559) 0:00:25.395 ********
2026-03-28 01:28:21.998364 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-03-28 01:28:21.998377 | orchestrator | changed: [localhost] => (item=member)
2026-03-28 01:28:21.998389 | orchestrator | changed: [localhost] => (item=creator)
2026-03-28 01:28:21.998400 | orchestrator |
2026-03-28 01:28:21.998412 | orchestrator | TASK [Create test server group] ************************************************
2026-03-28 01:28:21.998423 | orchestrator | Saturday 28 March 2026 01:26:05 +0000 (0:00:13.353) 0:00:38.749 ********
2026-03-28 01:28:21.998435 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998446 | orchestrator |
2026-03-28 01:28:21.998457 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-03-28 01:28:21.998468 | orchestrator | Saturday 28 March 2026 01:26:10 +0000 (0:00:05.222) 0:00:43.971 ********
2026-03-28 01:28:21.998479 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998489 | orchestrator |
2026-03-28 01:28:21.998501 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-03-28 01:28:21.998511 | orchestrator | Saturday 28 March 2026 01:26:16 +0000 (0:00:05.429) 0:00:49.401 ********
2026-03-28 01:28:21.998522 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998533 | orchestrator |
2026-03-28 01:28:21.998545 | orchestrator | TASK [Create icmp security group] **********************************************
2026-03-28 01:28:21.998575 | orchestrator | Saturday 28 March 2026 01:26:20 +0000 (0:00:04.668) 0:00:54.069 ********
2026-03-28 01:28:21.998586 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998598 | orchestrator |
2026-03-28 01:28:21.998609 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-03-28 01:28:21.998622 | orchestrator | Saturday 28 March 2026 01:26:25 +0000 (0:00:04.386) 0:00:58.456 ********
2026-03-28 01:28:21.998634 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998646 | orchestrator |
2026-03-28 01:28:21.998658 | orchestrator | TASK [Create test keypair] *****************************************************
2026-03-28 01:28:21.998671 | orchestrator | Saturday 28 March 2026 01:26:29 +0000 (0:00:04.585) 0:01:03.041 ********
2026-03-28 01:28:21.998683 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998695 | orchestrator |
2026-03-28 01:28:21.998707 | orchestrator | TASK [Create test network] *****************************************************
2026-03-28 01:28:21.998720 | orchestrator | Saturday 28 March 2026 01:26:34 +0000 (0:00:04.514) 0:01:07.556 ********
2026-03-28 01:28:21.998733 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998745 | orchestrator |
2026-03-28 01:28:21.998757 | orchestrator | TASK [Create test subnet] ******************************************************
2026-03-28 01:28:21.998770 | orchestrator | Saturday 28 March 2026 01:26:39 +0000 (0:00:05.539) 0:01:13.095 ********
2026-03-28 01:28:21.998782 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998794 | orchestrator |
2026-03-28 01:28:21.998806 | orchestrator | TASK [Create test router] ******************************************************
2026-03-28 01:28:21.998828 | orchestrator | Saturday 28 March 2026 01:26:45 +0000 (0:00:06.123) 0:01:19.218 ********
2026-03-28 01:28:21.998840 | orchestrator | changed: [localhost]
2026-03-28 01:28:21.998852 | orchestrator |
2026-03-28 01:28:21.998865 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-03-28 01:28:21.998876 | orchestrator |
2026-03-28 01:28:21.998888 | orchestrator | TASK [Get test server group] ***************************************************
2026-03-28 01:28:21.998900 | orchestrator | Saturday 28 March 2026 01:26:58 +0000 (0:00:12.621) 0:01:31.840 ********
2026-03-28 01:28:21.998913 | orchestrator | ok: [localhost]
2026-03-28 01:28:21.998925 | orchestrator |
2026-03-28 01:28:21.998938 | orchestrator | TASK [Detach test volume] ******************************************************
2026-03-28 01:28:21.998950 | orchestrator | Saturday 28 March 2026 01:27:03 +0000 (0:00:04.572) 0:01:36.412 ********
2026-03-28 01:28:21.998963 | orchestrator | skipping: [localhost]
2026-03-28 01:28:21.998975 | orchestrator |
2026-03-28 01:28:21.998988 | orchestrator | TASK [Delete test volume] ******************************************************
2026-03-28 01:28:21.999000 | orchestrator | Saturday 28 March 2026 01:27:03 +0000 (0:00:00.059) 0:01:36.471 ********
2026-03-28 01:28:21.999013 | orchestrator | skipping: [localhost]
2026-03-28 01:28:21.999025 | orchestrator |
2026-03-28 01:28:21.999038 | orchestrator | TASK [Delete test instances] ***************************************************
2026-03-28 01:28:21.999050 | orchestrator | Saturday 28 March 2026 01:27:03 +0000 (0:00:00.104) 0:01:36.576 ********
2026-03-28 01:28:21.999076 | orchestrator | skipping: [localhost] => (item=test-4)
2026-03-28 01:28:21.999088 | orchestrator | skipping: [localhost] => (item=test-3)
2026-03-28 01:28:21.999119 | orchestrator | skipping: [localhost] => (item=test-2)
2026-03-28 01:28:21.999131 | orchestrator | skipping: [localhost] => (item=test-1)
2026-03-28 01:28:21.999143 | orchestrator | skipping: [localhost] => (item=test)
2026-03-28 01:28:21.999154 | orchestrator | skipping: [localhost]
2026-03-28 01:28:21.999165 | orchestrator |
2026-03-28 01:28:21.999177 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-03-28 01:28:21.999188 | orchestrator | Saturday 28 March 2026 01:27:03 +0000 (0:00:00.186) 0:01:36.763 ********
2026-03-28 01:28:21.999200 | orchestrator | skipping: [localhost]
2026-03-28 01:28:21.999211 | orchestrator |
2026-03-28 01:28:21.999223 | orchestrator | TASK [Create test instances] ***************************************************
2026-03-28 01:28:21.999234 | orchestrator | Saturday 28 March 2026 01:27:03 +0000 (0:00:00.158) 0:01:36.922 ********
2026-03-28 01:28:21.999246 | orchestrator | changed: [localhost] => (item=test)
2026-03-28 01:28:21.999257 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-28 01:28:21.999269 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-28 01:28:21.999280 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-28 01:28:21.999292 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-28 01:28:21.999303 | orchestrator |
2026-03-28 01:28:21.999314 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-03-28 01:28:21.999326 | orchestrator | Saturday 28 March 2026 01:27:08 +0000 (0:00:05.340) 0:01:42.262 ********
2026-03-28 01:28:21.999338 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-03-28 01:28:21.999350 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-03-28 01:28:21.999362 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-03-28 01:28:21.999373 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-03-28 01:28:21.999385 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left).
2026-03-28 01:28:21.999398 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j68772561230.2840', 'results_file': '/ansible/.ansible_async/j68772561230.2840', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-28 01:28:21.999419 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j491959031014.2865', 'results_file': '/ansible/.ansible_async/j491959031014.2865', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-28 01:28:21.999429 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j660552355783.2890', 'results_file': '/ansible/.ansible_async/j660552355783.2890', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-28 01:28:21.999440 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j992438853317.2915', 'results_file': '/ansible/.ansible_async/j992438853317.2915', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-28 01:28:21.999450 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j47014743357.2940', 'results_file': '/ansible/.ansible_async/j47014743357.2940', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-28 01:28:21.999461 | orchestrator |
2026-03-28 01:28:21.999472 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-03-28 01:28:21.999483 | orchestrator | Saturday 28 March 2026 01:28:07 +0000 (0:00:58.560) 0:02:40.823 ********
2026-03-28 01:28:21.999493 | orchestrator | changed: [localhost] => (item=test)
2026-03-28 01:28:21.999504 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-28 01:28:21.999515 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-28 01:28:21.999526 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-28 01:28:21.999537 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-28 01:28:21.999549 | orchestrator |
2026-03-28 01:28:21.999585 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-03-28 01:28:21.999594 | orchestrator | Saturday 28 March 2026 01:28:12 +0000 (0:00:04.895) 0:02:45.718 ********
2026-03-28 01:28:21.999604 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-03-28 01:28:21.999616 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j212246284534.3051', 'results_file': '/ansible/.ansible_async/j212246284534.3051', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-28 01:28:21.999626 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j735203033141.3076', 'results_file': '/ansible/.ansible_async/j735203033141.3076', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-28 01:28:21.999637 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j962641897794.3101', 'results_file': '/ansible/.ansible_async/j962641897794.3101', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-28 01:28:21.999656 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j59888162996.3126', 'results_file': '/ansible/.ansible_async/j59888162996.3126', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-28 01:29:08.621244 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j697151925070.3151', 'results_file': '/ansible/.ansible_async/j697151925070.3151', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-28 01:29:08.621365 | orchestrator |
2026-03-28 01:29:08.621382 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-03-28 01:29:08.621396 | orchestrator | Saturday 28 March 2026 01:28:22 +0000 (0:00:10.470) 0:02:56.189 ********
2026-03-28 01:29:08.621404 | orchestrator | changed: [localhost] => (item=test)
2026-03-28 01:29:08.621413 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-28 01:29:08.621419 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-28 01:29:08.621440 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-28 01:29:08.621446 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-28 01:29:08.621479 | orchestrator |
2026-03-28 01:29:08.621486 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-03-28 01:29:08.621493 | orchestrator | Saturday 28 March 2026 01:28:28 +0000 (0:00:05.828) 0:03:02.017 ********
2026-03-28 01:29:08.621499 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-03-28 01:29:08.621506 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j746755363537.3220', 'results_file': '/ansible/.ansible_async/j746755363537.3220', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-28 01:29:08.621513 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j180441859654.3245', 'results_file': '/ansible/.ansible_async/j180441859654.3245', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-28 01:29:08.621519 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j786683221753.3271', 'results_file': '/ansible/.ansible_async/j786683221753.3271', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-28 01:29:08.621524 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j454766760417.3297', 'results_file': '/ansible/.ansible_async/j454766760417.3297', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-28 01:29:08.621550 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j188644965479.3323', 'results_file': '/ansible/.ansible_async/j188644965479.3323', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-28 01:29:08.621556 | orchestrator |
2026-03-28 01:29:08.621562 | orchestrator | TASK [Create test volume] ******************************************************
2026-03-28 01:29:08.621568 | orchestrator | Saturday 28 March 2026 01:28:39 +0000 (0:00:11.239) 0:03:13.256 ********
2026-03-28 01:29:08.621574 | orchestrator | changed: [localhost]
2026-03-28 01:29:08.621580 | orchestrator |
2026-03-28 01:29:08.621586 | orchestrator | TASK [Attach test volume] ******************************************************
2026-03-28 01:29:08.621591 | orchestrator | Saturday 28 March 2026
01:28:47 +0000 (0:00:07.652) 0:03:20.909 ******** 2026-03-28 01:29:08.621597 | orchestrator | changed: [localhost] 2026-03-28 01:29:08.621603 | orchestrator | 2026-03-28 01:29:08.621609 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-03-28 01:29:08.621614 | orchestrator | Saturday 28 March 2026 01:29:02 +0000 (0:00:14.854) 0:03:35.763 ******** 2026-03-28 01:29:08.621620 | orchestrator | ok: [localhost] 2026-03-28 01:29:08.621626 | orchestrator | 2026-03-28 01:29:08.621633 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-03-28 01:29:08.621642 | orchestrator | Saturday 28 March 2026 01:29:08 +0000 (0:00:05.879) 0:03:41.642 ******** 2026-03-28 01:29:08.621652 | orchestrator | ok: [localhost] => { 2026-03-28 01:29:08.621661 | orchestrator |  "msg": "192.168.112.196" 2026-03-28 01:29:08.621670 | orchestrator | } 2026-03-28 01:29:08.621680 | orchestrator | 2026-03-28 01:29:08.621688 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-28 01:29:08.621716 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-28 01:29:08.621726 | orchestrator | 2026-03-28 01:29:08.621735 | orchestrator | 2026-03-28 01:29:08.621743 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-28 01:29:08.621751 | orchestrator | Saturday 28 March 2026 01:29:08 +0000 (0:00:00.059) 0:03:41.701 ******** 2026-03-28 01:29:08.621761 | orchestrator | =============================================================================== 2026-03-28 01:29:08.621770 | orchestrator | Wait for instance creation to complete --------------------------------- 58.56s 2026-03-28 01:29:08.621780 | orchestrator | Attach test volume ----------------------------------------------------- 14.85s 2026-03-28 01:29:08.621800 | orchestrator | Add member roles to user 
test ------------------------------------------ 13.35s 2026-03-28 01:29:08.621815 | orchestrator | Create test router ----------------------------------------------------- 12.62s 2026-03-28 01:29:08.621826 | orchestrator | Wait for tags to be added ---------------------------------------------- 11.24s 2026-03-28 01:29:08.621836 | orchestrator | Wait for metadata to be added ------------------------------------------ 10.47s 2026-03-28 01:29:08.621846 | orchestrator | Create test volume ------------------------------------------------------ 7.65s 2026-03-28 01:29:08.621876 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.19s 2026-03-28 01:29:08.621887 | orchestrator | Create test subnet ------------------------------------------------------ 6.12s 2026-03-28 01:29:08.621897 | orchestrator | Create floating ip address ---------------------------------------------- 5.88s 2026-03-28 01:29:08.621905 | orchestrator | Add tag to instances ---------------------------------------------------- 5.83s 2026-03-28 01:29:08.621912 | orchestrator | Create test network ----------------------------------------------------- 5.54s 2026-03-28 01:29:08.621919 | orchestrator | Create ssh security group ----------------------------------------------- 5.43s 2026-03-28 01:29:08.621926 | orchestrator | Create test instances --------------------------------------------------- 5.34s 2026-03-28 01:29:08.621932 | orchestrator | Create test server group ------------------------------------------------ 5.22s 2026-03-28 01:29:08.621939 | orchestrator | Add metadata to instances ----------------------------------------------- 4.90s 2026-03-28 01:29:08.621946 | orchestrator | Create test-admin user -------------------------------------------------- 4.72s 2026-03-28 01:29:08.621953 | orchestrator | Create test project ----------------------------------------------------- 4.69s 2026-03-28 01:29:08.621959 | orchestrator | Add rule to ssh security group 
------------------------------------------ 4.67s 2026-03-28 01:29:08.621965 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.59s 2026-03-28 01:29:08.865748 | orchestrator | + server_list 2026-03-28 01:29:08.865874 | orchestrator | + openstack --os-cloud test server list 2026-03-28 01:29:12.705107 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 01:29:12.705195 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-03-28 01:29:12.705208 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 01:29:12.705219 | orchestrator | | 46614906-bcda-4f8c-8e5d-c11b62623981 | test-4 | ACTIVE | test=192.168.112.151, 192.168.200.61 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:29:12.705229 | orchestrator | | 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 | test-3 | ACTIVE | test=192.168.112.116, 192.168.200.207 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:29:12.705236 | orchestrator | | 9c067b53-5604-437f-b0b4-63a21d56ddf6 | test-2 | ACTIVE | test=192.168.112.108, 192.168.200.215 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:29:12.705242 | orchestrator | | a77a85b1-f465-404c-94d1-e65d9b71e4d3 | test | ACTIVE | test=192.168.112.196, 192.168.200.83 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:29:12.705248 | orchestrator | | ba83cc16-8ed3-42ff-965f-e177e7c4c4bf | test-1 | ACTIVE | test=192.168.112.106, 192.168.200.244 | N/A (booted from volume) | SCS-1L-1 | 2026-03-28 01:29:12.705253 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-28 01:29:13.020875 | orchestrator | + openstack --os-cloud test server show test 2026-03-28 01:29:16.454261 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:16.454424 | orchestrator | | Field | Value | 2026-03-28 01:29:16.454437 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:16.454448 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:29:16.454455 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:29:16.454463 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:29:16.454469 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-03-28 01:29:16.454477 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:29:16.454487 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:29:16.454507 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:29:16.454514 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:29:16.454544 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:29:16.454552 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:29:16.454565 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:29:16.454573 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:29:16.454581 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-03-28 01:29:16.454589 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:29:16.454596 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:29:16.454604 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:27:42.000000 | 2026-03-28 01:29:16.454629 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:29:16.454643 | orchestrator | | accessIPv4 | | 2026-03-28 01:29:16.454650 | orchestrator | | accessIPv6 | | 2026-03-28 01:29:16.454657 | orchestrator | | addresses | test=192.168.112.196, 192.168.200.83 | 2026-03-28 01:29:16.454667 | orchestrator | | config_drive | | 2026-03-28 01:29:16.454674 | orchestrator | | created | 2026-03-28T01:27:13Z | 2026-03-28 01:29:16.454682 | orchestrator | | description | None | 2026-03-28 01:29:16.454689 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:29:16.454696 | orchestrator | | hostId | 14fda089b4cf1b2b72d83922109996c71832c728c42138903d964b08 | 2026-03-28 01:29:16.454703 | orchestrator | | host_status | None | 2026-03-28 01:29:16.454724 | orchestrator | | id | a77a85b1-f465-404c-94d1-e65d9b71e4d3 | 2026-03-28 01:29:16.454731 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:29:16.454738 | orchestrator | | key_name | test | 2026-03-28 01:29:16.454744 | orchestrator | | locked | False | 2026-03-28 01:29:16.454754 | orchestrator | | locked_reason | None | 2026-03-28 01:29:16.454761 | orchestrator | | name | test | 2026-03-28 01:29:16.454768 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:29:16.454774 | orchestrator | | progress | 0 | 2026-03-28 01:29:16.454781 | orchestrator | | 
project_id | 5f293c5d154c4208b36efe38b7e7f575 | 2026-03-28 01:29:16.454792 | orchestrator | | properties | hostname='test' | 2026-03-28 01:29:16.454803 | orchestrator | | security_groups | name='icmp' | 2026-03-28 01:29:16.454810 | orchestrator | | | name='ssh' | 2026-03-28 01:29:16.454817 | orchestrator | | server_groups | None | 2026-03-28 01:29:16.454824 | orchestrator | | status | ACTIVE | 2026-03-28 01:29:16.454832 | orchestrator | | tags | test | 2026-03-28 01:29:16.454844 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:29:16.454852 | orchestrator | | updated | 2026-03-28T01:28:13Z | 2026-03-28 01:29:16.454859 | orchestrator | | user_id | d706ecdf18354e7a8130872081a18bf9 | 2026-03-28 01:29:16.454866 | orchestrator | | volumes_attached | delete_on_termination='True', id='54388a7a-2982-4e03-8fd3-ee3932e25611' | 2026-03-28 01:29:16.454881 | orchestrator | | | delete_on_termination='False', id='f82ab172-1178-46ec-9a24-7a335984be23' | 2026-03-28 01:29:16.459938 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:16.834500 | orchestrator | + openstack --os-cloud test server show test-1 2026-03-28 01:29:20.072853 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 
01:29:20.072989 | orchestrator | | Field | Value | 2026-03-28 01:29:20.073028 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:20.073050 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:29:20.073070 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:29:20.073089 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:29:20.073133 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-28 01:29:20.073179 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:29:20.073198 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:29:20.073244 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:29:20.073266 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:29:20.073352 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:29:20.073381 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:29:20.073395 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:29:20.073409 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:29:20.073422 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 01:29:20.073445 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:29:20.073457 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:29:20.073468 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:27:42.000000 | 2026-03-28 01:29:20.073489 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:29:20.073501 | orchestrator | | accessIPv4 | | 2026-03-28 
01:29:20.073512 | orchestrator | | accessIPv6 | | 2026-03-28 01:29:20.073570 | orchestrator | | addresses | test=192.168.112.106, 192.168.200.244 | 2026-03-28 01:29:20.073583 | orchestrator | | config_drive | | 2026-03-28 01:29:20.073594 | orchestrator | | created | 2026-03-28T01:27:13Z | 2026-03-28 01:29:20.073612 | orchestrator | | description | None | 2026-03-28 01:29:20.073623 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:29:20.073635 | orchestrator | | hostId | 14fda089b4cf1b2b72d83922109996c71832c728c42138903d964b08 | 2026-03-28 01:29:20.073645 | orchestrator | | host_status | None | 2026-03-28 01:29:20.073664 | orchestrator | | id | ba83cc16-8ed3-42ff-965f-e177e7c4c4bf | 2026-03-28 01:29:20.073676 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:29:20.073687 | orchestrator | | key_name | test | 2026-03-28 01:29:20.073703 | orchestrator | | locked | False | 2026-03-28 01:29:20.073714 | orchestrator | | locked_reason | None | 2026-03-28 01:29:20.073725 | orchestrator | | name | test-1 | 2026-03-28 01:29:20.073742 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:29:20.073753 | orchestrator | | progress | 0 | 2026-03-28 01:29:20.073765 | orchestrator | | project_id | 5f293c5d154c4208b36efe38b7e7f575 | 2026-03-28 01:29:20.073776 | orchestrator | | properties | hostname='test-1' | 2026-03-28 01:29:20.073795 | orchestrator | | security_groups | name='icmp' | 2026-03-28 01:29:20.073807 | orchestrator | | | name='ssh' | 2026-03-28 01:29:20.073818 | orchestrator | | server_groups | None | 2026-03-28 01:29:20.073833 | orchestrator | | status | ACTIVE | 2026-03-28 
01:29:20.073845 | orchestrator | | tags | test | 2026-03-28 01:29:20.073862 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:29:20.073873 | orchestrator | | updated | 2026-03-28T01:28:14Z | 2026-03-28 01:29:20.073884 | orchestrator | | user_id | d706ecdf18354e7a8130872081a18bf9 | 2026-03-28 01:29:20.073894 | orchestrator | | volumes_attached | delete_on_termination='True', id='0406c5cb-27cb-4c28-bd65-5b1066764003' | 2026-03-28 01:29:20.077628 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:20.426857 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-28 01:29:23.477465 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:23.477612 | orchestrator | | Field | Value | 2026-03-28 01:29:23.477627 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:23.477636 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:29:23.477681 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:29:23.477691 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:29:23.477699 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-28 01:29:23.477707 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:29:23.477715 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 01:29:23.477739 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:29:23.477748 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:29:23.477756 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:29:23.477764 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:29:23.477782 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:29:23.477790 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:29:23.477799 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 01:29:23.477807 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:29:23.477815 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:29:23.477823 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:27:42.000000 | 2026-03-28 01:29:23.477836 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:29:23.477844 | orchestrator | | accessIPv4 | | 2026-03-28 01:29:23.477852 | orchestrator | | accessIPv6 | | 2026-03-28 01:29:23.477869 | orchestrator | | addresses | test=192.168.112.108, 192.168.200.215 | 2026-03-28 01:29:23.477878 | orchestrator | | config_drive | | 2026-03-28 01:29:23.477886 | orchestrator | | created | 2026-03-28T01:27:14Z | 2026-03-28 01:29:23.477894 | orchestrator | | description | None | 2026-03-28 01:29:23.477902 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:29:23.477910 | orchestrator | | hostId | e9b5086864dd88f531ad55c3ec73359b7d60ae92a0a06e24ed963bd5 | 2026-03-28 01:29:23.477918 | orchestrator | | host_status | None | 2026-03-28 01:29:23.477931 | orchestrator | | id | 9c067b53-5604-437f-b0b4-63a21d56ddf6 | 2026-03-28 01:29:23.477940 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:29:23.477948 | orchestrator | | key_name | test | 2026-03-28 01:29:23.477965 | orchestrator | | locked | False | 2026-03-28 01:29:23.477974 | orchestrator | | locked_reason | None | 2026-03-28 01:29:23.477982 | orchestrator | | name | test-2 | 2026-03-28 01:29:23.477990 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:29:23.478004 | orchestrator | | progress | 0 | 2026-03-28 01:29:23.478078 | orchestrator | | project_id | 5f293c5d154c4208b36efe38b7e7f575 | 2026-03-28 01:29:23.478094 | orchestrator | | properties | hostname='test-2' | 2026-03-28 01:29:23.478116 | orchestrator | | security_groups | name='icmp' | 2026-03-28 01:29:23.478131 | orchestrator | | | name='ssh' | 2026-03-28 01:29:23.478150 | orchestrator | | server_groups | None | 2026-03-28 01:29:23.478164 | orchestrator | | status | ACTIVE | 2026-03-28 01:29:23.478174 | orchestrator | | tags | test | 2026-03-28 01:29:23.478183 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:29:23.478193 | orchestrator | | updated | 2026-03-28T01:28:15Z | 2026-03-28 01:29:23.478202 | orchestrator | | user_id | d706ecdf18354e7a8130872081a18bf9 | 2026-03-28 01:29:23.478211 | orchestrator | | volumes_attached | delete_on_termination='True', id='14ffbddd-f4bb-4897-93ac-d6a6c1a43a2b' | 2026-03-28 01:29:23.483308 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:23.782846 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-28 01:29:27.060724 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:27.060832 | orchestrator | | Field | Value | 2026-03-28 01:29:27.060841 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:27.060860 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-28 01:29:27.060869 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-28 01:29:27.060883 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-28 01:29:27.060893 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-28 01:29:27.060902 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-28 01:29:27.060911 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-28 
01:29:27.060938 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-28 01:29:27.060956 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-28 01:29:27.060966 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-28 01:29:27.060976 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-28 01:29:27.060986 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-28 01:29:27.060994 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-28 01:29:27.061000 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-28 01:29:27.061006 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-28 01:29:27.061011 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-28 01:29:27.061027 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:27:42.000000 | 2026-03-28 01:29:27.061044 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-28 01:29:27.061050 | orchestrator | | accessIPv4 | | 2026-03-28 01:29:27.061055 | orchestrator | | accessIPv6 | | 2026-03-28 01:29:27.061434 | orchestrator | | addresses | test=192.168.112.116, 192.168.200.207 | 2026-03-28 01:29:27.061459 | orchestrator | | config_drive | | 2026-03-28 01:29:27.061469 | orchestrator | | created | 2026-03-28T01:27:15Z | 2026-03-28 01:29:27.061480 | orchestrator | | description | None | 2026-03-28 01:29:27.061490 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-28 01:29:27.061500 | orchestrator | | hostId | e9b5086864dd88f531ad55c3ec73359b7d60ae92a0a06e24ed963bd5 | 2026-03-28 01:29:27.061510 | orchestrator | | host_status | None | 2026-03-28 01:29:27.061574 | orchestrator | | id | 
9055edaf-91e2-4e7e-a5d0-5eaa1516d028 | 2026-03-28 01:29:27.061591 | orchestrator | | image | N/A (booted from volume) | 2026-03-28 01:29:27.061602 | orchestrator | | key_name | test | 2026-03-28 01:29:27.061612 | orchestrator | | locked | False | 2026-03-28 01:29:27.061622 | orchestrator | | locked_reason | None | 2026-03-28 01:29:27.061630 | orchestrator | | name | test-3 | 2026-03-28 01:29:27.061641 | orchestrator | | pinned_availability_zone | None | 2026-03-28 01:29:27.061650 | orchestrator | | progress | 0 | 2026-03-28 01:29:27.061659 | orchestrator | | project_id | 5f293c5d154c4208b36efe38b7e7f575 | 2026-03-28 01:29:27.061675 | orchestrator | | properties | hostname='test-3' | 2026-03-28 01:29:27.061691 | orchestrator | | security_groups | name='icmp' | 2026-03-28 01:29:27.061706 | orchestrator | | | name='ssh' | 2026-03-28 01:29:27.061712 | orchestrator | | server_groups | None | 2026-03-28 01:29:27.061717 | orchestrator | | status | ACTIVE | 2026-03-28 01:29:27.061723 | orchestrator | | tags | test | 2026-03-28 01:29:27.061729 | orchestrator | | trusted_image_certificates | None | 2026-03-28 01:29:27.061734 | orchestrator | | updated | 2026-03-28T01:28:16Z | 2026-03-28 01:29:27.061740 | orchestrator | | user_id | d706ecdf18354e7a8130872081a18bf9 | 2026-03-28 01:29:27.061750 | orchestrator | | volumes_attached | delete_on_termination='True', id='481e8309-bf48-46f1-8626-4c6553913978' | 2026-03-28 01:29:27.066055 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-28 01:29:27.384676 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-28 01:29:30.597944 | 
orchestrator | +-------------------------------------+----------------------------+
2026-03-28 01:29:30.598795 | orchestrator | | Field | Value |
2026-03-28 01:29:30.598825 | orchestrator | +-------------------------------------+----------------------------+
2026-03-28 01:29:30.598832 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-28 01:29:30.598838 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-28 01:29:30.598843 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-28 01:29:30.598848 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-03-28 01:29:30.598867 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-28 01:29:30.598873 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-28 01:29:30.598894 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-28 01:29:30.598900 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-28 01:29:30.598909 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-28 01:29:30.598915 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-28 01:29:30.598920 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-28 01:29:30.598925 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-28 01:29:30.598931 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-28 01:29:30.598940 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-28 01:29:30.598945 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-28 01:29:30.598951 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-28T01:27:43.000000 |
2026-03-28 01:29:30.598961 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-28 01:29:30.598969 | orchestrator | | accessIPv4 | |
2026-03-28 01:29:30.598974 | orchestrator | | accessIPv6 | |
2026-03-28 01:29:30.598980 | orchestrator | | addresses | test=192.168.112.151, 192.168.200.61 |
2026-03-28 01:29:30.598985 | orchestrator | | config_drive | |
2026-03-28 01:29:30.598990 | orchestrator | | created | 2026-03-28T01:27:16Z |
2026-03-28 01:29:30.598995 | orchestrator | | description | None |
2026-03-28 01:29:30.599005 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-28 01:29:30.599010 | orchestrator | | hostId | 14fda089b4cf1b2b72d83922109996c71832c728c42138903d964b08 |
2026-03-28 01:29:30.599015 | orchestrator | | host_status | None |
2026-03-28 01:29:30.599026 | orchestrator | | id | 46614906-bcda-4f8c-8e5d-c11b62623981 |
2026-03-28 01:29:30.599034 | orchestrator | | image | N/A (booted from volume) |
2026-03-28 01:29:30.599039 | orchestrator | | key_name | test |
2026-03-28 01:29:30.599045 | orchestrator | | locked | False |
2026-03-28 01:29:30.599050 | orchestrator | | locked_reason | None |
2026-03-28 01:29:30.599055 | orchestrator | | name | test-4 |
2026-03-28 01:29:30.599064 | orchestrator | | pinned_availability_zone | None |
2026-03-28 01:29:30.599069 | orchestrator | | progress | 0 |
2026-03-28 01:29:30.599075 | orchestrator | | project_id | 5f293c5d154c4208b36efe38b7e7f575 |
2026-03-28 01:29:30.599080 | orchestrator | | properties | hostname='test-4' |
2026-03-28 01:29:30.599091 | orchestrator | | security_groups | name='icmp' |
2026-03-28 01:29:30.599096 | orchestrator | | | name='ssh' |
2026-03-28 01:29:30.599102 | orchestrator | | server_groups | None |
2026-03-28 01:29:30.599107 | orchestrator | | status | ACTIVE |
2026-03-28 01:29:30.599112 | orchestrator | | tags | test |
2026-03-28 01:29:30.599125 | orchestrator | | trusted_image_certificates | None |
2026-03-28 01:29:30.599131 | orchestrator | | updated | 2026-03-28T01:28:17Z |
2026-03-28 01:29:30.599136 | orchestrator | | user_id | d706ecdf18354e7a8130872081a18bf9 |
2026-03-28 01:29:30.599141 | orchestrator | | volumes_attached | delete_on_termination='True', id='0a865887-2827-4b2a-b22f-1939affcf7e4' |
2026-03-28 01:29:30.602719 | orchestrator | +-------------------------------------+----------------------------+
2026-03-28 01:29:30.937937 | orchestrator | + server_ping
2026-03-28 01:29:30.939693 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-28 01:29:30.939747 | orchestrator | ++ tr -d '\r'
2026-03-28 01:29:34.077552 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:29:34.077640 | orchestrator | + ping -c3 192.168.112.106
2026-03-28 01:29:34.089748 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data.
2026-03-28 01:29:34.089824 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=5.85 ms
2026-03-28 01:29:35.087457 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=2.05 ms
2026-03-28 01:29:36.089011 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=1.88 ms
2026-03-28 01:29:36.089481 | orchestrator |
2026-03-28 01:29:36.090269 | orchestrator | --- 192.168.112.106 ping statistics ---
2026-03-28 01:29:36.090346 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:29:36.090381 | orchestrator | rtt min/avg/max/mdev = 1.878/3.260/5.852/1.833 ms
2026-03-28 01:29:36.090392 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:29:36.090401 | orchestrator | + ping -c3 192.168.112.196
2026-03-28 01:29:36.105763 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data.
2026-03-28 01:29:36.105864 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=10.0 ms
2026-03-28 01:29:37.099231 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=2.30 ms
2026-03-28 01:29:38.100817 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=2.00 ms
2026-03-28 01:29:38.100896 | orchestrator |
2026-03-28 01:29:38.100904 | orchestrator | --- 192.168.112.196 ping statistics ---
2026-03-28 01:29:38.100911 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:29:38.100939 | orchestrator | rtt min/avg/max/mdev = 1.998/4.768/10.005/3.704 ms
2026-03-28 01:29:38.101263 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:29:38.101279 | orchestrator | + ping -c3 192.168.112.108
2026-03-28 01:29:38.112792 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-03-28 01:29:38.112873 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=8.16 ms
2026-03-28 01:29:39.108832 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.71 ms
2026-03-28 01:29:40.111065 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.15 ms
2026-03-28 01:29:40.111150 | orchestrator |
2026-03-28 01:29:40.111159 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-03-28 01:29:40.111167 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:29:40.111175 | orchestrator | rtt min/avg/max/mdev = 2.153/4.342/8.164/2.711 ms
2026-03-28 01:29:40.111185 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:29:40.111194 | orchestrator | + ping -c3 192.168.112.116
2026-03-28 01:29:40.121978 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2026-03-28 01:29:40.122120 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=7.18 ms
2026-03-28 01:29:41.117590 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.38 ms
2026-03-28 01:29:42.119183 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.95 ms
2026-03-28 01:29:42.119309 | orchestrator |
2026-03-28 01:29:42.119334 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-03-28 01:29:42.119355 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2026-03-28 01:29:42.119373 | orchestrator | rtt min/avg/max/mdev = 1.951/3.837/7.181/2.370 ms
2026-03-28 01:29:42.119393 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:29:42.119412 | orchestrator | + ping -c3 192.168.112.151
2026-03-28 01:29:42.128373 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data.
2026-03-28 01:29:42.128484 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=6.00 ms
2026-03-28 01:29:43.124838 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=1.89 ms
2026-03-28 01:29:44.127988 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=2.06 ms
2026-03-28 01:29:44.128066 | orchestrator |
2026-03-28 01:29:44.128077 | orchestrator | --- 192.168.112.151 ping statistics ---
2026-03-28 01:29:44.128086 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:29:44.128093 | orchestrator | rtt min/avg/max/mdev = 1.890/3.316/5.999/1.898 ms
2026-03-28 01:29:44.128100 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-28 01:29:44.128107 | orchestrator | + compute_list
2026-03-28 01:29:44.128113 | orchestrator | + osism manage compute list testbed-node-3
2026-03-28 01:29:45.835719 | orchestrator | 2026-03-28 01:29:45 | ERROR  | Unable to get ansible vault password
2026-03-28 01:29:45.835856 | orchestrator | 2026-03-28 01:29:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:29:45.835885 | orchestrator | 2026-03-28 01:29:45 | ERROR  | Dropping encrypted entries
2026-03-28 01:29:49.668228 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:29:49.668367 | orchestrator | | ID | Name | Status |
2026-03-28 01:29:49.668393 | orchestrator | |--------------------------------------+--------+----------|
2026-03-28 01:29:49.668413 | orchestrator | | 46614906-bcda-4f8c-8e5d-c11b62623981 | test-4 | ACTIVE |
2026-03-28 01:29:49.668425 | orchestrator | | a77a85b1-f465-404c-94d1-e65d9b71e4d3 | test | ACTIVE |
2026-03-28 01:29:49.668436 | orchestrator | | ba83cc16-8ed3-42ff-965f-e177e7c4c4bf | test-1 | ACTIVE |
2026-03-28 01:29:49.668447 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:29:50.029838 | orchestrator | + osism manage compute list testbed-node-4
2026-03-28 01:29:51.849424 | orchestrator | 2026-03-28 01:29:51 | ERROR  | Unable to get ansible vault password
2026-03-28 01:29:51.849634 | orchestrator | 2026-03-28 01:29:51 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:29:51.849657 | orchestrator | 2026-03-28 01:29:51 | ERROR  | Dropping encrypted entries
2026-03-28 01:29:53.522423 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:29:53.522535 | orchestrator | | ID | Name | Status |
2026-03-28 01:29:53.522543 | orchestrator | |--------------------------------------+--------+----------|
2026-03-28 01:29:53.522549 | orchestrator | | 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 | test-3 | ACTIVE |
2026-03-28 01:29:53.522553 | orchestrator | | 9c067b53-5604-437f-b0b4-63a21d56ddf6 | test-2 | ACTIVE |
2026-03-28 01:29:53.522558 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:29:53.889013 | orchestrator | + osism manage compute list testbed-node-5
2026-03-28 01:29:55.674347 | orchestrator | 2026-03-28 01:29:55 | ERROR  | Unable to get ansible vault password
2026-03-28 01:29:55.674451 | orchestrator | 2026-03-28 01:29:55 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:29:55.674465 | orchestrator | 2026-03-28 01:29:55 | ERROR  | Dropping encrypted entries
2026-03-28 01:29:56.979742 | orchestrator | +------+--------+----------+
2026-03-28 01:29:56.979866 | orchestrator | | ID | Name | Status |
2026-03-28 01:29:56.979892 | orchestrator | |------+--------+----------|
2026-03-28 01:29:56.979910 | orchestrator | +------+--------+----------+
2026-03-28 01:29:57.347684 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2026-03-28 01:29:59.019545 | orchestrator | 2026-03-28 01:29:59 | ERROR  | Unable to get ansible vault password
2026-03-28 01:29:59.019634 | orchestrator | 2026-03-28 01:29:59 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:29:59.019644 | orchestrator | 2026-03-28 01:29:59 | ERROR  | Dropping encrypted entries
2026-03-28 01:30:00.598886 | orchestrator | 2026-03-28 01:30:00 | INFO  | Live migrating server 9055edaf-91e2-4e7e-a5d0-5eaa1516d028
2026-03-28 01:30:13.844032 | orchestrator | 2026-03-28 01:30:13 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:30:16.320661 | orchestrator | 2026-03-28 01:30:16 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:30:18.715166 | orchestrator | 2026-03-28 01:30:18 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:30:21.054266 | orchestrator | 2026-03-28 01:30:21 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:30:23.474166 | orchestrator | 2026-03-28 01:30:23 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:30:25.973087 | orchestrator | 2026-03-28 01:30:25 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:30:28.290015 | orchestrator | 2026-03-28 01:30:28 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:30:30.623089 | orchestrator | 2026-03-28 01:30:30 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:30:32.930682 | orchestrator | 2026-03-28 01:30:32 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) completed with status ACTIVE
2026-03-28 01:30:32.930758 | orchestrator | 2026-03-28 01:30:32 | INFO  | Live migrating server 9c067b53-5604-437f-b0b4-63a21d56ddf6
2026-03-28 01:30:43.904773 | orchestrator | 2026-03-28 01:30:43 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress
2026-03-28 01:30:46.252963 | orchestrator | 2026-03-28 01:30:46 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress
2026-03-28 01:30:48.606422 | orchestrator | 2026-03-28 01:30:48 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress
2026-03-28 01:30:50.889084 | orchestrator | 2026-03-28 01:30:50 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress
2026-03-28 01:30:53.256766 | orchestrator | 2026-03-28 01:30:53 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress
2026-03-28 01:30:55.676848 | orchestrator | 2026-03-28 01:30:55 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress
2026-03-28 01:30:58.068163 | orchestrator | 2026-03-28 01:30:58 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress
2026-03-28 01:31:00.447030 | orchestrator | 2026-03-28 01:31:00 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress
2026-03-28 01:31:02.771416 | orchestrator | 2026-03-28 01:31:02 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) completed with status ACTIVE
2026-03-28 01:31:03.163340 | orchestrator | + compute_list
2026-03-28 01:31:03.163462 | orchestrator | + osism manage compute list testbed-node-3
2026-03-28 01:31:04.903369 | orchestrator | 2026-03-28 01:31:04 | ERROR  | Unable to get ansible vault password
2026-03-28 01:31:04.903574 | orchestrator | 2026-03-28 01:31:04 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:31:04.904956 | orchestrator | 2026-03-28 01:31:04 | ERROR  | Dropping encrypted entries
2026-03-28 01:31:06.592853 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:31:06.592978 | orchestrator | | ID | Name | Status |
2026-03-28 01:31:06.592994 | orchestrator | |--------------------------------------+--------+----------|
2026-03-28 01:31:06.593038 | orchestrator | | 46614906-bcda-4f8c-8e5d-c11b62623981 | test-4 | ACTIVE |
2026-03-28 01:31:06.593059 | orchestrator | | 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 | test-3 | ACTIVE |
2026-03-28 01:31:06.593078 | orchestrator | | 9c067b53-5604-437f-b0b4-63a21d56ddf6 | test-2 | ACTIVE |
2026-03-28 01:31:06.593097 | orchestrator | | a77a85b1-f465-404c-94d1-e65d9b71e4d3 | test | ACTIVE |
2026-03-28 01:31:06.593116 | orchestrator | | ba83cc16-8ed3-42ff-965f-e177e7c4c4bf | test-1 | ACTIVE |
2026-03-28 01:31:06.593129 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:31:06.954861 | orchestrator | + osism manage compute list testbed-node-4
2026-03-28 01:31:08.576539 | orchestrator | 2026-03-28 01:31:08 | ERROR  | Unable to get ansible vault password
2026-03-28 01:31:08.576686 | orchestrator | 2026-03-28 01:31:08 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:31:08.576708 | orchestrator | 2026-03-28 01:31:08 | ERROR  | Dropping encrypted entries
2026-03-28 01:31:09.770193 | orchestrator | +------+--------+----------+
2026-03-28 01:31:09.770298 | orchestrator | | ID | Name | Status |
2026-03-28 01:31:09.770316 | orchestrator | |------+--------+----------|
2026-03-28 01:31:09.770329 | orchestrator | +------+--------+----------+
2026-03-28 01:31:10.132702 | orchestrator | + osism manage compute list testbed-node-5
2026-03-28 01:31:11.847002 | orchestrator | 2026-03-28 01:31:11 | ERROR  | Unable to get ansible vault password
2026-03-28 01:31:11.847099 | orchestrator | 2026-03-28 01:31:11 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:31:11.847113 | orchestrator | 2026-03-28 01:31:11 | ERROR  | Dropping encrypted entries
2026-03-28 01:31:13.063209 | orchestrator | +------+--------+----------+
2026-03-28 01:31:13.063327 | orchestrator | | ID | Name | Status |
2026-03-28 01:31:13.063367 | orchestrator | |------+--------+----------|
2026-03-28 01:31:13.063384 | orchestrator | +------+--------+----------+
2026-03-28 01:31:13.425743 | orchestrator | + server_ping
2026-03-28 01:31:13.426438 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-28 01:31:13.426496 | orchestrator | ++ tr -d '\r'
2026-03-28 01:31:16.417152 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:16.417230 | orchestrator | + ping -c3 192.168.112.106
2026-03-28 01:31:16.426251 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data.
2026-03-28 01:31:16.426314 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=6.80 ms
2026-03-28 01:31:17.424059 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=2.33 ms
2026-03-28 01:31:18.425918 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=1.72 ms
2026-03-28 01:31:18.426833 | orchestrator |
2026-03-28 01:31:18.426888 | orchestrator | --- 192.168.112.106 ping statistics ---
2026-03-28 01:31:18.426898 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-28 01:31:18.426905 | orchestrator | rtt min/avg/max/mdev = 1.722/3.616/6.799/2.264 ms
2026-03-28 01:31:18.426912 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:18.426919 | orchestrator | + ping -c3 192.168.112.196
2026-03-28 01:31:18.438520 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data.
2026-03-28 01:31:18.438589 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=7.46 ms
2026-03-28 01:31:19.435158 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=2.40 ms
2026-03-28 01:31:20.436344 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=1.76 ms
2026-03-28 01:31:20.436430 | orchestrator |
2026-03-28 01:31:20.436441 | orchestrator | --- 192.168.112.196 ping statistics ---
2026-03-28 01:31:20.436450 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:31:20.436457 | orchestrator | rtt min/avg/max/mdev = 1.757/3.873/7.460/2.550 ms
2026-03-28 01:31:20.436466 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:20.436635 | orchestrator | + ping -c3 192.168.112.108
2026-03-28 01:31:20.444995 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-03-28 01:31:20.445059 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=4.81 ms
2026-03-28 01:31:21.444234 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.32 ms
2026-03-28 01:31:22.446302 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.82 ms
2026-03-28 01:31:22.446412 | orchestrator |
2026-03-28 01:31:22.446428 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-03-28 01:31:22.446442 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:31:22.446453 | orchestrator | rtt min/avg/max/mdev = 1.822/2.982/4.808/1.306 ms
2026-03-28 01:31:22.446537 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:22.446553 | orchestrator | + ping -c3 192.168.112.116
2026-03-28 01:31:22.459058 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2026-03-28 01:31:22.459169 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.38 ms
2026-03-28 01:31:23.456549 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.18 ms
2026-03-28 01:31:24.458408 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.06 ms
2026-03-28 01:31:24.458609 | orchestrator |
2026-03-28 01:31:24.458638 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-03-28 01:31:24.458660 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-28 01:31:24.458683 | orchestrator | rtt min/avg/max/mdev = 2.063/3.540/6.377/2.006 ms
2026-03-28 01:31:24.459082 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:24.459109 | orchestrator | + ping -c3 192.168.112.151
2026-03-28 01:31:24.471075 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data.
2026-03-28 01:31:24.471219 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=6.15 ms
2026-03-28 01:31:25.469280 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=2.31 ms
2026-03-28 01:31:26.470573 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=1.65 ms
2026-03-28 01:31:26.470676 | orchestrator |
2026-03-28 01:31:26.470695 | orchestrator | --- 192.168.112.151 ping statistics ---
2026-03-28 01:31:26.470709 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:31:26.470720 | orchestrator | rtt min/avg/max/mdev = 1.650/3.369/6.149/1.983 ms
2026-03-28 01:31:26.470732 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-03-28 01:31:28.210758 | orchestrator | 2026-03-28 01:31:28 | ERROR  | Unable to get ansible vault password
2026-03-28 01:31:28.210872 | orchestrator | 2026-03-28 01:31:28 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:31:28.210889 | orchestrator | 2026-03-28 01:31:28 | ERROR  | Dropping encrypted entries
2026-03-28 01:31:29.519959 | orchestrator | 2026-03-28 01:31:29 | INFO  | No migratable instances found on node testbed-node-5
2026-03-28 01:31:29.906361 | orchestrator | + compute_list
2026-03-28 01:31:29.906462 | orchestrator | + osism manage compute list testbed-node-3
2026-03-28 01:31:31.710053 | orchestrator | 2026-03-28 01:31:31 | ERROR  | Unable to get ansible vault password
2026-03-28 01:31:31.710138 | orchestrator | 2026-03-28 01:31:31 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:31:31.710152 | orchestrator | 2026-03-28 01:31:31 | ERROR  | Dropping encrypted entries
2026-03-28 01:31:33.329405 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:31:33.329558 | orchestrator | | ID | Name | Status |
2026-03-28 01:31:33.329570 | orchestrator | |--------------------------------------+--------+----------|
2026-03-28 01:31:33.329576 | orchestrator | | 46614906-bcda-4f8c-8e5d-c11b62623981 | test-4 | ACTIVE |
2026-03-28 01:31:33.329581 | orchestrator | | 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 | test-3 | ACTIVE |
2026-03-28 01:31:33.329586 | orchestrator | | 9c067b53-5604-437f-b0b4-63a21d56ddf6 | test-2 | ACTIVE |
2026-03-28 01:31:33.329591 | orchestrator | | a77a85b1-f465-404c-94d1-e65d9b71e4d3 | test | ACTIVE |
2026-03-28 01:31:33.329597 | orchestrator | | ba83cc16-8ed3-42ff-965f-e177e7c4c4bf | test-1 | ACTIVE |
2026-03-28 01:31:33.329602 | orchestrator | +--------------------------------------+--------+----------+
2026-03-28 01:31:33.704086 | orchestrator | + osism manage compute list testbed-node-4
2026-03-28 01:31:35.423728 | orchestrator | 2026-03-28 01:31:35 | ERROR  | Unable to get ansible vault password
2026-03-28 01:31:35.423894 | orchestrator | 2026-03-28 01:31:35 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:31:35.423921 | orchestrator | 2026-03-28 01:31:35 | ERROR  | Dropping encrypted entries
2026-03-28 01:31:36.650930 | orchestrator | +------+--------+----------+
2026-03-28 01:31:36.651062 | orchestrator | | ID | Name | Status |
2026-03-28 01:31:36.651087 | orchestrator | |------+--------+----------|
2026-03-28 01:31:36.651105 | orchestrator | +------+--------+----------+
2026-03-28 01:31:37.025744 | orchestrator | + osism manage compute list testbed-node-5
2026-03-28 01:31:38.680769 | orchestrator | 2026-03-28 01:31:38 | ERROR  | Unable to get ansible vault password
2026-03-28 01:31:38.680901 | orchestrator | 2026-03-28 01:31:38 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:31:38.680921 | orchestrator | 2026-03-28 01:31:38 | ERROR  | Dropping encrypted entries
2026-03-28 01:31:39.935762 | orchestrator | +------+--------+----------+
2026-03-28 01:31:39.935870 | orchestrator | | ID | Name | Status |
2026-03-28 01:31:39.935882 | orchestrator | |------+--------+----------|
2026-03-28 01:31:39.935892 | orchestrator | +------+--------+----------+
2026-03-28 01:31:40.334711 | orchestrator | + server_ping
2026-03-28 01:31:40.335893 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-28 01:31:40.335939 | orchestrator | ++ tr -d '\r'
2026-03-28 01:31:43.290857 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:43.290985 | orchestrator | + ping -c3 192.168.112.106
2026-03-28 01:31:43.299747 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data.
2026-03-28 01:31:43.299868 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=6.65 ms
2026-03-28 01:31:44.297126 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=1.96 ms
2026-03-28 01:31:45.298948 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=1.88 ms
2026-03-28 01:31:45.299053 | orchestrator |
2026-03-28 01:31:45.299067 | orchestrator | --- 192.168.112.106 ping statistics ---
2026-03-28 01:31:45.299079 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:31:45.299090 | orchestrator | rtt min/avg/max/mdev = 1.882/3.497/6.654/2.232 ms
2026-03-28 01:31:45.299600 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:45.299626 | orchestrator | + ping -c3 192.168.112.196
2026-03-28 01:31:45.313144 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data.
2026-03-28 01:31:45.313235 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=7.89 ms
2026-03-28 01:31:46.309610 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=2.64 ms
2026-03-28 01:31:47.310351 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=1.94 ms
2026-03-28 01:31:47.310561 | orchestrator |
2026-03-28 01:31:47.310584 | orchestrator | --- 192.168.112.196 ping statistics ---
2026-03-28 01:31:47.310598 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:31:47.310622 | orchestrator | rtt min/avg/max/mdev = 1.935/4.155/7.894/2.659 ms
2026-03-28 01:31:47.310642 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:47.310651 | orchestrator | + ping -c3 192.168.112.108
2026-03-28 01:31:47.320561 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-03-28 01:31:47.320639 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=6.16 ms
2026-03-28 01:31:48.318248 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.30 ms
2026-03-28 01:31:49.319763 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.83 ms
2026-03-28 01:31:49.319876 | orchestrator |
2026-03-28 01:31:49.319892 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-03-28 01:31:49.319904 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:31:49.319915 | orchestrator | rtt min/avg/max/mdev = 1.831/3.429/6.156/1.937 ms
2026-03-28 01:31:49.319926 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:49.319936 | orchestrator | + ping -c3 192.168.112.116
2026-03-28 01:31:49.329832 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2026-03-28 01:31:49.329941 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=5.06 ms
2026-03-28 01:31:50.328002 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=1.95 ms
2026-03-28 01:31:51.329682 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.65 ms
2026-03-28 01:31:51.329770 | orchestrator |
2026-03-28 01:31:51.329778 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-03-28 01:31:51.329785 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-28 01:31:51.329791 | orchestrator | rtt min/avg/max/mdev = 1.646/2.886/5.064/1.545 ms
2026-03-28 01:31:51.330241 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:31:51.330253 | orchestrator | + ping -c3 192.168.112.151
2026-03-28 01:31:51.337350 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data.
2026-03-28 01:31:51.337421 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=4.29 ms
2026-03-28 01:31:52.336802 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=2.50 ms
2026-03-28 01:31:53.338191 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=1.92 ms
2026-03-28 01:31:53.338301 | orchestrator |
2026-03-28 01:31:53.338317 | orchestrator | --- 192.168.112.151 ping statistics ---
2026-03-28 01:31:53.338331 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:31:53.338343 | orchestrator | rtt min/avg/max/mdev = 1.918/2.903/4.294/1.011 ms
2026-03-28 01:31:53.338740 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-03-28 01:31:55.080861 | orchestrator | 2026-03-28 01:31:55 | ERROR  | Unable to get ansible vault password
2026-03-28 01:31:55.080970 | orchestrator | 2026-03-28 01:31:55 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-28 01:31:55.080987 | orchestrator | 2026-03-28 01:31:55 | ERROR  | Dropping encrypted entries
2026-03-28 01:31:57.039979 | orchestrator | 2026-03-28 01:31:57 | INFO  | Live migrating server 46614906-bcda-4f8c-8e5d-c11b62623981
2026-03-28 01:32:09.395693 | orchestrator | 2026-03-28 01:32:09 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress
2026-03-28 01:32:11.932836 | orchestrator | 2026-03-28 01:32:11 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress
2026-03-28 01:32:14.695155 | orchestrator | 2026-03-28 01:32:14 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress
2026-03-28 01:32:17.128730 | orchestrator | 2026-03-28 01:32:17 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress
2026-03-28 01:32:19.548031 | orchestrator | 2026-03-28 01:32:19 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress
2026-03-28 01:32:22.060906 | orchestrator | 2026-03-28 01:32:22 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress
2026-03-28 01:32:24.604020 | orchestrator | 2026-03-28 01:32:24 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress
2026-03-28 01:32:26.918957 | orchestrator | 2026-03-28 01:32:26 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress
2026-03-28 01:32:29.306100 | orchestrator | 2026-03-28 01:32:29 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) completed with status ACTIVE
2026-03-28 01:32:29.306208 | orchestrator | 2026-03-28 01:32:29 | INFO  | Live migrating server 9055edaf-91e2-4e7e-a5d0-5eaa1516d028
2026-03-28 01:32:40.410385 | orchestrator | 2026-03-28 01:32:40 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:32:42.795366 | orchestrator | 2026-03-28 01:32:42 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:32:45.161047 | orchestrator | 2026-03-28 01:32:45 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:32:47.523714 | orchestrator | 2026-03-28 01:32:47 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:32:49.855145 | orchestrator | 2026-03-28 01:32:49 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:32:52.159382 | orchestrator | 2026-03-28 01:32:52 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:32:54.451147 | orchestrator | 2026-03-28 01:32:54 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:32:56.817338 | orchestrator | 2026-03-28 01:32:56 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:32:59.151696 | orchestrator | 2026-03-28 01:32:59 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress
2026-03-28 01:33:01.528078 | orchestrator | 2026-03-28 01:33:01 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) completed with status ACTIVE
2026-03-28 01:33:01.528171 | orchestrator | 2026-03-28 01:33:01 | INFO  | Live migrating server 9c067b53-5604-437f-b0b4-63a21d56ddf6
2026-03-28 01:33:14.411717 | orchestrator | 2026-03-28 01:33:14 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress
2026-03-28 01:33:16.812534 | orchestrator | 2026-03-28 01:33:16 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress
2026-03-28 01:33:19.191682 | orchestrator | 2026-03-28 01:33:19 | INFO  | Live migration of
9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:33:21.552022 | orchestrator | 2026-03-28 01:33:21 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:33:23.844725 | orchestrator | 2026-03-28 01:33:23 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:33:26.177885 | orchestrator | 2026-03-28 01:33:26 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:33:28.483153 | orchestrator | 2026-03-28 01:33:28 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:33:30.756677 | orchestrator | 2026-03-28 01:33:30 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:33:33.073056 | orchestrator | 2026-03-28 01:33:33 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) completed with status ACTIVE 2026-03-28 01:33:33.073192 | orchestrator | 2026-03-28 01:33:33 | INFO  | Live migrating server a77a85b1-f465-404c-94d1-e65d9b71e4d3 2026-03-28 01:33:45.681938 | orchestrator | 2026-03-28 01:33:45 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:33:48.055665 | orchestrator | 2026-03-28 01:33:48 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:33:50.464117 | orchestrator | 2026-03-28 01:33:50 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:33:52.838712 | orchestrator | 2026-03-28 01:33:52 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:33:55.191554 | orchestrator | 2026-03-28 01:33:55 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:33:57.570322 | orchestrator | 
2026-03-28 01:33:57 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:33:59.904228 | orchestrator | 2026-03-28 01:33:59 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:34:02.200975 | orchestrator | 2026-03-28 01:34:02 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:34:04.594921 | orchestrator | 2026-03-28 01:34:04 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:34:06.963386 | orchestrator | 2026-03-28 01:34:06 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:34:09.349195 | orchestrator | 2026-03-28 01:34:09 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) completed with status ACTIVE 2026-03-28 01:34:09.349294 | orchestrator | 2026-03-28 01:34:09 | INFO  | Live migrating server ba83cc16-8ed3-42ff-965f-e177e7c4c4bf 2026-03-28 01:34:19.436214 | orchestrator | 2026-03-28 01:34:19 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:34:21.804593 | orchestrator | 2026-03-28 01:34:21 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:34:24.148605 | orchestrator | 2026-03-28 01:34:24 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:34:26.465720 | orchestrator | 2026-03-28 01:34:26 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:34:28.815872 | orchestrator | 2026-03-28 01:34:28 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:34:31.172716 | orchestrator | 2026-03-28 01:34:31 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 
2026-03-28 01:34:33.497647 | orchestrator | 2026-03-28 01:34:33 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:34:35.855146 | orchestrator | 2026-03-28 01:34:35 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:34:38.219311 | orchestrator | 2026-03-28 01:34:38 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) completed with status ACTIVE 2026-03-28 01:34:38.594165 | orchestrator | + compute_list 2026-03-28 01:34:38.594233 | orchestrator | + osism manage compute list testbed-node-3 2026-03-28 01:34:40.321616 | orchestrator | 2026-03-28 01:34:40 | ERROR  | Unable to get ansible vault password 2026-03-28 01:34:40.321710 | orchestrator | 2026-03-28 01:34:40 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:34:40.321724 | orchestrator | 2026-03-28 01:34:40 | ERROR  | Dropping encrypted entries 2026-03-28 01:34:41.651854 | orchestrator | +------+--------+----------+ 2026-03-28 01:34:41.651988 | orchestrator | | ID | Name | Status | 2026-03-28 01:34:41.652014 | orchestrator | |------+--------+----------| 2026-03-28 01:34:41.652032 | orchestrator | +------+--------+----------+ 2026-03-28 01:34:42.003542 | orchestrator | + osism manage compute list testbed-node-4 2026-03-28 01:34:43.760236 | orchestrator | 2026-03-28 01:34:43 | ERROR  | Unable to get ansible vault password 2026-03-28 01:34:43.760337 | orchestrator | 2026-03-28 01:34:43 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:34:43.760353 | orchestrator | 2026-03-28 01:34:43 | ERROR  | Dropping encrypted entries 2026-03-28 01:34:45.425753 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:34:45.425867 | orchestrator | | ID | Name | Status | 2026-03-28 01:34:45.425879 | orchestrator | 
|--------------------------------------+--------+----------| 2026-03-28 01:34:45.425888 | orchestrator | | 46614906-bcda-4f8c-8e5d-c11b62623981 | test-4 | ACTIVE | 2026-03-28 01:34:45.425895 | orchestrator | | 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 | test-3 | ACTIVE | 2026-03-28 01:34:45.425903 | orchestrator | | 9c067b53-5604-437f-b0b4-63a21d56ddf6 | test-2 | ACTIVE | 2026-03-28 01:34:45.425910 | orchestrator | | a77a85b1-f465-404c-94d1-e65d9b71e4d3 | test | ACTIVE | 2026-03-28 01:34:45.425918 | orchestrator | | ba83cc16-8ed3-42ff-965f-e177e7c4c4bf | test-1 | ACTIVE | 2026-03-28 01:34:45.425935 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:34:45.783546 | orchestrator | + osism manage compute list testbed-node-5 2026-03-28 01:34:47.565713 | orchestrator | 2026-03-28 01:34:47 | ERROR  | Unable to get ansible vault password 2026-03-28 01:34:47.565811 | orchestrator | 2026-03-28 01:34:47 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:34:47.565826 | orchestrator | 2026-03-28 01:34:47 | ERROR  | Dropping encrypted entries 2026-03-28 01:34:48.745046 | orchestrator | +------+--------+----------+ 2026-03-28 01:34:48.745184 | orchestrator | | ID | Name | Status | 2026-03-28 01:34:48.745212 | orchestrator | |------+--------+----------| 2026-03-28 01:34:48.745232 | orchestrator | +------+--------+----------+ 2026-03-28 01:34:49.128731 | orchestrator | + server_ping 2026-03-28 01:34:49.129791 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-28 01:34:49.130663 | orchestrator | ++ tr -d '\r' 2026-03-28 01:34:52.251930 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:34:52.252081 | orchestrator | + ping -c3 192.168.112.106 2026-03-28 01:34:52.266172 | orchestrator | PING 
192.168.112.106 (192.168.112.106) 56(84) bytes of data. 2026-03-28 01:34:52.266294 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=9.40 ms 2026-03-28 01:34:53.260579 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=2.12 ms 2026-03-28 01:34:54.262253 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=2.11 ms 2026-03-28 01:34:54.262388 | orchestrator | 2026-03-28 01:34:54.262463 | orchestrator | --- 192.168.112.106 ping statistics --- 2026-03-28 01:34:54.262479 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:34:54.262515 | orchestrator | rtt min/avg/max/mdev = 2.106/4.540/9.398/3.434 ms 2026-03-28 01:34:54.263613 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:34:54.263706 | orchestrator | + ping -c3 192.168.112.196 2026-03-28 01:34:54.277562 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data. 
2026-03-28 01:34:54.277683 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=8.96 ms 2026-03-28 01:34:55.273521 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=2.83 ms 2026-03-28 01:34:56.273322 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=1.89 ms 2026-03-28 01:34:56.273511 | orchestrator | 2026-03-28 01:34:56.273544 | orchestrator | --- 192.168.112.196 ping statistics --- 2026-03-28 01:34:56.273568 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:34:56.273587 | orchestrator | rtt min/avg/max/mdev = 1.890/4.560/8.958/3.133 ms 2026-03-28 01:34:56.274073 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:34:56.274145 | orchestrator | + ping -c3 192.168.112.108 2026-03-28 01:34:56.284510 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 2026-03-28 01:34:56.284581 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=7.41 ms 2026-03-28 01:34:57.281662 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.57 ms 2026-03-28 01:34:58.283568 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.94 ms 2026-03-28 01:34:58.283803 | orchestrator | 2026-03-28 01:34:58.283835 | orchestrator | --- 192.168.112.108 ping statistics --- 2026-03-28 01:34:58.283854 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-28 01:34:58.283867 | orchestrator | rtt min/avg/max/mdev = 1.944/3.972/7.405/2.440 ms 2026-03-28 01:34:58.283889 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:34:58.283994 | orchestrator | + ping -c3 192.168.112.116 2026-03-28 01:34:58.297091 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2026-03-28 01:34:58.297166 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=7.79 ms 2026-03-28 01:34:59.293028 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.70 ms 2026-03-28 01:35:00.294835 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=2.05 ms 2026-03-28 01:35:00.294944 | orchestrator | 2026-03-28 01:35:00.294962 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-03-28 01:35:00.295008 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:35:00.295020 | orchestrator | rtt min/avg/max/mdev = 2.052/4.183/7.794/2.567 ms 2026-03-28 01:35:00.295035 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:35:00.295055 | orchestrator | + ping -c3 192.168.112.151 2026-03-28 01:35:00.304567 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data. 2026-03-28 01:35:00.304654 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=6.41 ms 2026-03-28 01:35:01.302817 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=2.67 ms 2026-03-28 01:35:02.303862 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=1.87 ms 2026-03-28 01:35:02.303969 | orchestrator | 2026-03-28 01:35:02.303983 | orchestrator | --- 192.168.112.151 ping statistics --- 2026-03-28 01:35:02.303996 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-03-28 01:35:02.304006 | orchestrator | rtt min/avg/max/mdev = 1.868/3.646/6.405/1.977 ms 2026-03-28 01:35:02.305486 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2026-03-28 01:35:04.028937 | orchestrator | 2026-03-28 01:35:04 | ERROR  | Unable to get ansible vault password 2026-03-28 01:35:04.029121 | orchestrator | 2026-03-28 01:35:04 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-03-28 01:35:04.029151 | orchestrator | 2026-03-28 01:35:04 | ERROR  | Dropping encrypted entries 2026-03-28 01:35:05.750088 | orchestrator | 2026-03-28 01:35:05 | INFO  | Live migrating server 46614906-bcda-4f8c-8e5d-c11b62623981 2026-03-28 01:35:19.146833 | orchestrator | 2026-03-28 01:35:19 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:21.549682 | orchestrator | 2026-03-28 01:35:21 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:23.928937 | orchestrator | 2026-03-28 01:35:23 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:26.339705 | orchestrator | 2026-03-28 01:35:26 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:28.675544 | orchestrator | 2026-03-28 01:35:28 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:30.994337 | orchestrator | 2026-03-28 01:35:30 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:33.371470 | orchestrator | 2026-03-28 01:35:33 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:35.672340 | orchestrator | 2026-03-28 01:35:35 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:37.998503 | orchestrator | 2026-03-28 01:35:38 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:40.383549 | orchestrator | 2026-03-28 01:35:40 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:42.658785 | orchestrator | 2026-03-28 01:35:42 | INFO  | Live migration of 
46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) is still in progress 2026-03-28 01:35:44.956115 | orchestrator | 2026-03-28 01:35:44 | INFO  | Live migration of 46614906-bcda-4f8c-8e5d-c11b62623981 (test-4) completed with status ACTIVE 2026-03-28 01:35:44.956225 | orchestrator | 2026-03-28 01:35:44 | INFO  | Live migrating server 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 2026-03-28 01:35:55.415135 | orchestrator | 2026-03-28 01:35:55 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress 2026-03-28 01:35:57.776481 | orchestrator | 2026-03-28 01:35:57 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress 2026-03-28 01:36:00.199860 | orchestrator | 2026-03-28 01:36:00 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress 2026-03-28 01:36:02.539945 | orchestrator | 2026-03-28 01:36:02 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress 2026-03-28 01:36:04.869472 | orchestrator | 2026-03-28 01:36:04 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress 2026-03-28 01:36:07.257208 | orchestrator | 2026-03-28 01:36:07 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress 2026-03-28 01:36:09.637578 | orchestrator | 2026-03-28 01:36:09 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress 2026-03-28 01:36:11.919055 | orchestrator | 2026-03-28 01:36:11 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) is still in progress 2026-03-28 01:36:14.217811 | orchestrator | 2026-03-28 01:36:14 | INFO  | Live migration of 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 (test-3) completed with status ACTIVE 2026-03-28 01:36:14.217902 | orchestrator | 2026-03-28 01:36:14 | INFO  | Live migrating server 9c067b53-5604-437f-b0b4-63a21d56ddf6 2026-03-28 01:36:26.059439 | orchestrator | 2026-03-28 
01:36:26 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:36:28.402693 | orchestrator | 2026-03-28 01:36:28 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:36:30.734050 | orchestrator | 2026-03-28 01:36:30 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:36:33.088257 | orchestrator | 2026-03-28 01:36:33 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:36:35.411219 | orchestrator | 2026-03-28 01:36:35 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:36:37.758876 | orchestrator | 2026-03-28 01:36:37 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:36:40.152966 | orchestrator | 2026-03-28 01:36:40 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:36:42.514549 | orchestrator | 2026-03-28 01:36:42 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) is still in progress 2026-03-28 01:36:44.915690 | orchestrator | 2026-03-28 01:36:44 | INFO  | Live migration of 9c067b53-5604-437f-b0b4-63a21d56ddf6 (test-2) completed with status ACTIVE 2026-03-28 01:36:44.915780 | orchestrator | 2026-03-28 01:36:44 | INFO  | Live migrating server a77a85b1-f465-404c-94d1-e65d9b71e4d3 2026-03-28 01:36:55.249184 | orchestrator | 2026-03-28 01:36:55 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:36:57.659906 | orchestrator | 2026-03-28 01:36:57 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:37:00.018228 | orchestrator | 2026-03-28 01:37:00 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 
01:37:02.401241 | orchestrator | 2026-03-28 01:37:02 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:37:04.738891 | orchestrator | 2026-03-28 01:37:04 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:37:07.040549 | orchestrator | 2026-03-28 01:37:07 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:37:09.415282 | orchestrator | 2026-03-28 01:37:09 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:37:11.911205 | orchestrator | 2026-03-28 01:37:11 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:37:14.278091 | orchestrator | 2026-03-28 01:37:14 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:37:16.650670 | orchestrator | 2026-03-28 01:37:16 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) is still in progress 2026-03-28 01:37:18.968429 | orchestrator | 2026-03-28 01:37:18 | INFO  | Live migration of a77a85b1-f465-404c-94d1-e65d9b71e4d3 (test) completed with status ACTIVE 2026-03-28 01:37:18.968535 | orchestrator | 2026-03-28 01:37:18 | INFO  | Live migrating server ba83cc16-8ed3-42ff-965f-e177e7c4c4bf 2026-03-28 01:37:29.520975 | orchestrator | 2026-03-28 01:37:29 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:37:31.952464 | orchestrator | 2026-03-28 01:37:31 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:37:34.438189 | orchestrator | 2026-03-28 01:37:34 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:37:36.983666 | orchestrator | 2026-03-28 01:37:36 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf 
(test-1) is still in progress 2026-03-28 01:37:39.284181 | orchestrator | 2026-03-28 01:37:39 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:37:41.744758 | orchestrator | 2026-03-28 01:37:41 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:37:44.054497 | orchestrator | 2026-03-28 01:37:44 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:37:46.428761 | orchestrator | 2026-03-28 01:37:46 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) is still in progress 2026-03-28 01:37:48.767653 | orchestrator | 2026-03-28 01:37:48 | INFO  | Live migration of ba83cc16-8ed3-42ff-965f-e177e7c4c4bf (test-1) completed with status ACTIVE 2026-03-28 01:37:49.239988 | orchestrator | + compute_list 2026-03-28 01:37:49.240133 | orchestrator | + osism manage compute list testbed-node-3 2026-03-28 01:37:51.195088 | orchestrator | 2026-03-28 01:37:51 | ERROR  | Unable to get ansible vault password 2026-03-28 01:37:51.195217 | orchestrator | 2026-03-28 01:37:51 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:37:51.195229 | orchestrator | 2026-03-28 01:37:51 | ERROR  | Dropping encrypted entries 2026-03-28 01:37:52.453491 | orchestrator | +------+--------+----------+ 2026-03-28 01:37:52.453631 | orchestrator | | ID | Name | Status | 2026-03-28 01:37:52.453654 | orchestrator | |------+--------+----------| 2026-03-28 01:37:52.453670 | orchestrator | +------+--------+----------+ 2026-03-28 01:37:52.844512 | orchestrator | + osism manage compute list testbed-node-4 2026-03-28 01:37:54.739475 | orchestrator | 2026-03-28 01:37:54 | ERROR  | Unable to get ansible vault password 2026-03-28 01:37:54.739578 | orchestrator | 2026-03-28 01:37:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: 
'/share/ansible_vault_password.key' 2026-03-28 01:37:54.739590 | orchestrator | 2026-03-28 01:37:54 | ERROR  | Dropping encrypted entries 2026-03-28 01:37:55.954545 | orchestrator | +------+--------+----------+ 2026-03-28 01:37:55.954717 | orchestrator | | ID | Name | Status | 2026-03-28 01:37:55.954729 | orchestrator | |------+--------+----------| 2026-03-28 01:37:55.954735 | orchestrator | +------+--------+----------+ 2026-03-28 01:37:56.373994 | orchestrator | + osism manage compute list testbed-node-5 2026-03-28 01:37:58.167237 | orchestrator | 2026-03-28 01:37:58 | ERROR  | Unable to get ansible vault password 2026-03-28 01:37:58.167388 | orchestrator | 2026-03-28 01:37:58 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-28 01:37:58.167408 | orchestrator | 2026-03-28 01:37:58 | ERROR  | Dropping encrypted entries 2026-03-28 01:37:59.993679 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:37:59.993775 | orchestrator | | ID | Name | Status | 2026-03-28 01:37:59.993786 | orchestrator | |--------------------------------------+--------+----------| 2026-03-28 01:37:59.993795 | orchestrator | | 46614906-bcda-4f8c-8e5d-c11b62623981 | test-4 | ACTIVE | 2026-03-28 01:37:59.993803 | orchestrator | | 9055edaf-91e2-4e7e-a5d0-5eaa1516d028 | test-3 | ACTIVE | 2026-03-28 01:37:59.993826 | orchestrator | | 9c067b53-5604-437f-b0b4-63a21d56ddf6 | test-2 | ACTIVE | 2026-03-28 01:37:59.993834 | orchestrator | | a77a85b1-f465-404c-94d1-e65d9b71e4d3 | test | ACTIVE | 2026-03-28 01:37:59.993842 | orchestrator | | ba83cc16-8ed3-42ff-965f-e177e7c4c4bf | test-1 | ACTIVE | 2026-03-28 01:37:59.993849 | orchestrator | +--------------------------------------+--------+----------+ 2026-03-28 01:38:00.440433 | orchestrator | + server_ping 2026-03-28 01:38:00.441133 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 
2026-03-28 01:38:00.441167 | orchestrator | ++ tr -d '\r' 2026-03-28 01:38:03.899710 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:38:03.899763 | orchestrator | + ping -c3 192.168.112.106 2026-03-28 01:38:03.912315 | orchestrator | PING 192.168.112.106 (192.168.112.106) 56(84) bytes of data. 2026-03-28 01:38:03.912387 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=1 ttl=63 time=10.3 ms 2026-03-28 01:38:04.906808 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=2 ttl=63 time=2.99 ms 2026-03-28 01:38:05.907509 | orchestrator | 64 bytes from 192.168.112.106: icmp_seq=3 ttl=63 time=2.33 ms 2026-03-28 01:38:05.907577 | orchestrator | 2026-03-28 01:38:05.907584 | orchestrator | --- 192.168.112.106 ping statistics --- 2026-03-28 01:38:05.907589 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-28 01:38:05.907594 | orchestrator | rtt min/avg/max/mdev = 2.331/5.223/10.345/3.631 ms 2026-03-28 01:38:05.907598 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:38:05.907603 | orchestrator | + ping -c3 192.168.112.196 2026-03-28 01:38:05.919998 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data. 
2026-03-28 01:38:05.920051 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=8.43 ms 2026-03-28 01:38:06.915466 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=2.25 ms 2026-03-28 01:38:07.917395 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=1.70 ms 2026-03-28 01:38:07.917460 | orchestrator | 2026-03-28 01:38:07.917467 | orchestrator | --- 192.168.112.196 ping statistics --- 2026-03-28 01:38:07.917474 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:38:07.917479 | orchestrator | rtt min/avg/max/mdev = 1.703/4.129/8.431/3.050 ms 2026-03-28 01:38:07.917484 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:38:07.917488 | orchestrator | + ping -c3 192.168.112.108 2026-03-28 01:38:07.927686 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 2026-03-28 01:38:07.927744 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=5.98 ms 2026-03-28 01:38:08.925165 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.61 ms 2026-03-28 01:38:09.927207 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.08 ms 2026-03-28 01:38:09.927272 | orchestrator | 2026-03-28 01:38:09.927278 | orchestrator | --- 192.168.112.108 ping statistics --- 2026-03-28 01:38:09.927342 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-28 01:38:09.927349 | orchestrator | rtt min/avg/max/mdev = 2.081/3.557/5.983/1.728 ms 2026-03-28 01:38:09.927353 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-28 01:38:09.927358 | orchestrator | + ping -c3 192.168.112.116 2026-03-28 01:38:09.940781 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2026-03-28 01:38:09.940854 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=9.86 ms
2026-03-28 01:38:10.936018 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=3.67 ms
2026-03-28 01:38:11.936007 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.76 ms
2026-03-28 01:38:11.936094 | orchestrator |
2026-03-28 01:38:11.936109 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-03-28 01:38:11.936121 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:38:11.936131 | orchestrator | rtt min/avg/max/mdev = 1.761/5.098/9.859/3.455 ms
2026-03-28 01:38:11.936968 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-28 01:38:11.937004 | orchestrator | + ping -c3 192.168.112.151
2026-03-28 01:38:11.949250 | orchestrator | PING 192.168.112.151 (192.168.112.151) 56(84) bytes of data.
2026-03-28 01:38:11.949391 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=1 ttl=63 time=7.37 ms
2026-03-28 01:38:12.946157 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=2 ttl=63 time=2.44 ms
2026-03-28 01:38:13.948462 | orchestrator | 64 bytes from 192.168.112.151: icmp_seq=3 ttl=63 time=2.13 ms
2026-03-28 01:38:13.948531 | orchestrator |
2026-03-28 01:38:13.948539 | orchestrator | --- 192.168.112.151 ping statistics ---
2026-03-28 01:38:13.948546 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-28 01:38:13.948552 | orchestrator | rtt min/avg/max/mdev = 2.125/3.976/7.368/2.401 ms
2026-03-28 01:38:14.458650 | orchestrator | ok: Runtime: 0:17:30.002187
2026-03-28 01:38:14.512756 |
2026-03-28 01:38:14.512938 | TASK [Run tempest]
2026-03-28 01:38:15.194679 | orchestrator |
2026-03-28 01:38:15.194793 | orchestrator | # Tempest
2026-03-28 01:38:15.194802 | orchestrator |
2026-03-28 01:38:15.194808 | orchestrator | + set -e
2026-03-28 01:38:15.194814 | orchestrator | + source /opt/manager-vars.sh
2026-03-28 01:38:15.194821 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-28 01:38:15.194828 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-28 01:38:15.194846 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-28 01:38:15.194854 | orchestrator | ++ CEPH_VERSION=reef
2026-03-28 01:38:15.194860 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-28 01:38:15.194866 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-28 01:38:15.194875 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-28 01:38:15.194881 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-28 01:38:15.194885 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-03-28 01:38:15.194891 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-03-28 01:38:15.194895 | orchestrator | ++ export ARA=false
2026-03-28 01:38:15.194899 | orchestrator | ++ ARA=false
2026-03-28 01:38:15.194907 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-28 01:38:15.194911 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-28 01:38:15.194915 | orchestrator | ++ export TEMPEST=true
2026-03-28 01:38:15.194921 | orchestrator | ++ TEMPEST=true
2026-03-28 01:38:15.194925 | orchestrator | ++ export IS_ZUUL=true
2026-03-28 01:38:15.194928 | orchestrator | ++ IS_ZUUL=true
2026-03-28 01:38:15.194933 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2026-03-28 01:38:15.194944 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2026-03-28 01:38:15.194948 | orchestrator | ++ export EXTERNAL_API=false
2026-03-28 01:38:15.194952 | orchestrator | ++ EXTERNAL_API=false
2026-03-28 01:38:15.194956 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-28 01:38:15.194960 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-28 01:38:15.194964 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-28 01:38:15.194967 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-28 01:38:15.194971 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-28 01:38:15.194975 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-28 01:38:15.194979 | orchestrator | + echo
2026-03-28 01:38:15.194983 | orchestrator | + echo '# Tempest'
2026-03-28 01:38:15.194987 | orchestrator | + echo
2026-03-28 01:38:15.194991 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-03-28 01:38:15.194995 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-03-28 01:38:26.899187 | orchestrator | 2026-03-28 01:38:26 | INFO  | Prepare task for execution of tempest.
2026-03-28 01:38:26.977215 | orchestrator | 2026-03-28 01:38:26 | INFO  | Task 1cb7cd74-ab49-4b40-8a52-f16567bb1a2a (tempest) was prepared for execution.
2026-03-28 01:38:26.977302 | orchestrator | 2026-03-28 01:38:26 | INFO  | It takes a moment until task 1cb7cd74-ab49-4b40-8a52-f16567bb1a2a (tempest) has been started and output is visible here.
2026-03-28 01:39:54.284855 | orchestrator |
2026-03-28 01:39:54.284967 | orchestrator | PLAY [Run tempest] *************************************************************
2026-03-28 01:39:54.284985 | orchestrator |
2026-03-28 01:39:54.285006 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-03-28 01:39:54.285035 | orchestrator | Saturday 28 March 2026 01:38:31 +0000 (0:00:00.385) 0:00:00.385 ********
2026-03-28 01:39:54.285055 | orchestrator | changed: [testbed-manager]
2026-03-28 01:39:54.285074 | orchestrator |
2026-03-28 01:39:54.285093 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-03-28 01:39:54.285113 | orchestrator | Saturday 28 March 2026 01:38:32 +0000 (0:00:01.175) 0:00:01.560 ********
2026-03-28 01:39:54.285134 | orchestrator | changed: [testbed-manager]
2026-03-28 01:39:54.285153 | orchestrator |
2026-03-28 01:39:54.285172 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-03-28 01:39:54.285184 | orchestrator | Saturday 28 March 2026 01:38:33 +0000 (0:00:01.448) 0:00:03.008 ********
2026-03-28 01:39:54.285195 | orchestrator | ok: [testbed-manager]
2026-03-28 01:39:54.285207 | orchestrator |
2026-03-28 01:39:54.285219 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-03-28 01:39:54.285262 | orchestrator | Saturday 28 March 2026 01:38:34 +0000 (0:00:00.482) 0:00:03.491 ********
2026-03-28 01:39:54.285274 | orchestrator | changed: [testbed-manager]
2026-03-28 01:39:54.285286 | orchestrator |
2026-03-28 01:39:54.285297 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-03-28 01:39:54.285313 | orchestrator | Saturday 28 March 2026 01:38:56 +0000 (0:00:22.658) 0:00:26.149 ********
2026-03-28 01:39:54.285374 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-03-28 01:39:54.285395 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-03-28 01:39:54.285417 | orchestrator |
2026-03-28 01:39:54.285458 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-03-28 01:39:54.285494 | orchestrator | Saturday 28 March 2026 01:39:06 +0000 (0:00:09.795) 0:00:35.945 ********
2026-03-28 01:39:54.285514 | orchestrator | ok: [testbed-manager] => {
2026-03-28 01:39:54.285533 | orchestrator |  "changed": false,
2026-03-28 01:39:54.285552 | orchestrator |  "msg": "All assertions passed"
2026-03-28 01:39:54.285567 | orchestrator | }
2026-03-28 01:39:54.285578 | orchestrator |
2026-03-28 01:39:54.285590 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-03-28 01:39:54.285601 | orchestrator | Saturday 28 March 2026 01:39:06 +0000 (0:00:00.214) 0:00:36.159 ********
2026-03-28 01:39:54.285611 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:39:54.285623 | orchestrator |
2026-03-28 01:39:54.285634 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-03-28 01:39:54.285645 | orchestrator | Saturday 28 March 2026 01:39:11 +0000 (0:00:04.272) 0:00:40.432 ********
2026-03-28 01:39:54.285656 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:39:54.285667 | orchestrator |
2026-03-28 01:39:54.285678 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-03-28 01:39:54.285697 | orchestrator | Saturday 28 March 2026 01:39:13 +0000 (0:00:02.149) 0:00:42.581 ********
2026-03-28 01:39:54.285725 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:39:54.285744 | orchestrator |
2026-03-28 01:39:54.285762 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-03-28 01:39:54.285779 | orchestrator | Saturday 28 March 2026 01:39:17 +0000 (0:00:00.234) 0:00:46.990 ********
2026-03-28 01:39:54.285794 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:39:54.285810 | orchestrator |
2026-03-28 01:39:54.285828 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-03-28 01:39:54.285843 | orchestrator | Saturday 28 March 2026 01:39:18 +0000 (0:00:00.234) 0:00:47.225 ********
2026-03-28 01:39:54.285861 | orchestrator | changed: [testbed-manager]
2026-03-28 01:39:54.285879 | orchestrator |
2026-03-28 01:39:54.285898 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-03-28 01:39:54.285915 | orchestrator | Saturday 28 March 2026 01:39:21 +0000 (0:00:03.255) 0:00:50.480 ********
2026-03-28 01:39:54.285933 | orchestrator | changed: [testbed-manager]
2026-03-28 01:39:54.285951 | orchestrator |
2026-03-28 01:39:54.285969 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-03-28 01:39:54.285987 | orchestrator | Saturday 28 March 2026 01:39:32 +0000 (0:00:10.898) 0:01:01.378 ********
2026-03-28 01:39:54.286005 | orchestrator | changed: [testbed-manager]
2026-03-28 01:39:54.286107 | orchestrator |
2026-03-28 01:39:54.286123 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-03-28 01:39:54.286135 | orchestrator | Saturday 28 March 2026 01:39:32 +0000 (0:00:00.781) 0:01:02.160 ********
2026-03-28 01:39:54.286153 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:39:54.286171 | orchestrator |
2026-03-28 01:39:54.286191 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-03-28 01:39:54.286210 | orchestrator | Saturday 28 March 2026 01:39:34 +0000 (0:00:01.820) 0:01:03.980 ********
2026-03-28 01:39:54.286224 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:39:54.286260 | orchestrator |
2026-03-28 01:39:54.286271 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-03-28 01:39:54.286282 | orchestrator | Saturday 28 March 2026 01:39:36 +0000 (0:00:01.732) 0:01:05.713 ********
2026-03-28 01:39:54.286293 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:39:54.286304 | orchestrator |
2026-03-28 01:39:54.286315 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-03-28 01:39:54.286342 | orchestrator | Saturday 28 March 2026 01:39:36 +0000 (0:00:00.186) 0:01:05.899 ********
2026-03-28 01:39:54.286353 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:39:54.286364 | orchestrator |
2026-03-28 01:39:54.286385 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-03-28 01:39:54.286397 | orchestrator | Saturday 28 March 2026 01:39:37 +0000 (0:00:00.436) 0:01:06.336 ********
2026-03-28 01:39:54.286408 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-28 01:39:54.286418 | orchestrator |
2026-03-28 01:39:54.286429 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-03-28 01:39:54.286465 | orchestrator | Saturday 28 March 2026 01:39:41 +0000 (0:00:04.341) 0:01:10.678 ********
2026-03-28 01:39:54.286477 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-03-28 01:39:54.286488 | orchestrator |  "changed": false,
2026-03-28 01:39:54.286499 | orchestrator |  "msg": "All assertions passed"
2026-03-28 01:39:54.286510 | orchestrator | }
2026-03-28 01:39:54.286521 | orchestrator |
2026-03-28 01:39:54.286534 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-03-28 01:39:54.286545 | orchestrator | Saturday 28 March 2026 01:39:41 +0000 (0:00:00.201) 0:01:10.879 ********
2026-03-28 01:39:54.286557 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-28 01:39:54.286570 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-28 01:39:54.286581 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:39:54.286592 | orchestrator |
2026-03-28 01:39:54.286603 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-03-28 01:39:54.286614 | orchestrator | Saturday 28 March 2026 01:39:41 +0000 (0:00:00.194) 0:01:11.074 ********
2026-03-28 01:39:54.286625 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:39:54.286635 | orchestrator |
2026-03-28 01:39:54.286647 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-03-28 01:39:54.286658 | orchestrator | Saturday 28 March 2026 01:39:42 +0000 (0:00:00.202) 0:01:11.276 ********
2026-03-28 01:39:54.286668 | orchestrator | ok: [testbed-manager]
2026-03-28 01:39:54.286679 | orchestrator |
2026-03-28 01:39:54.286690 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-03-28 01:39:54.286702 | orchestrator | Saturday 28 March 2026 01:39:42 +0000 (0:00:00.526) 0:01:11.802 ********
2026-03-28 01:39:54.286713 | orchestrator | changed: [testbed-manager]
2026-03-28 01:39:54.286724 | orchestrator |
2026-03-28 01:39:54.286735 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-03-28 01:39:54.286746 | orchestrator | Saturday 28 March 2026 01:39:43 +0000 (0:00:00.959) 0:01:12.762 ********
2026-03-28 01:39:54.286757 | orchestrator | ok: [testbed-manager]
2026-03-28 01:39:54.286768 | orchestrator |
2026-03-28 01:39:54.286779 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-03-28 01:39:54.286790 | orchestrator | Saturday 28 March 2026 01:39:44 +0000 (0:00:00.465) 0:01:13.227 ********
2026-03-28 01:39:54.286800 | orchestrator | skipping: [testbed-manager]
2026-03-28 01:39:54.286811 | orchestrator |
2026-03-28 01:39:54.286822 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-03-28 01:39:54.286833 | orchestrator | Saturday 28 March 2026 01:39:44 +0000 (0:00:00.348) 0:01:13.576 ********
2026-03-28 01:39:54.286844 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-28 01:39:54.286856 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-28 01:39:54.286867 | orchestrator |
2026-03-28 01:39:54.286878 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-03-28 01:39:54.286889 | orchestrator | Saturday 28 March 2026 01:39:53 +0000 (0:00:08.797) 0:01:22.373 ********
2026-03-28 01:39:54.286900 | orchestrator | changed: [testbed-manager]
2026-03-28 01:39:54.286918 | orchestrator |
2026-03-28 01:39:54.286929 | orchestrator | PLAY RECAP *********************************************************************
2026-03-28 01:39:54.286942 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-28 01:39:54.286954 | orchestrator |
2026-03-28 01:39:54.286965 | orchestrator |
2026-03-28 01:39:54.286976 | orchestrator | TASKS RECAP ********************************************************************
2026-03-28 01:39:54.286987 | orchestrator | Saturday 28 March 2026 01:39:54 +0000 (0:00:01.102) 0:01:23.475 ********
2026-03-28 01:39:54.286998 | orchestrator | ===============================================================================
2026-03-28 01:39:54.287009 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 22.66s
2026-03-28 01:39:54.287020 | orchestrator | osism.validations.tempest : Install qemu-utils package ----------------- 10.90s
2026-03-28 01:39:54.287031 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 9.80s
2026-03-28 01:39:54.287042 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 8.80s
2026-03-28 01:39:54.287058 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 4.41s
2026-03-28 01:39:54.287069 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 4.34s
2026-03-28 01:39:54.287081 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 4.27s
2026-03-28 01:39:54.287092 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 3.26s
2026-03-28 01:39:54.287103 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 2.15s
2026-03-28 01:39:54.287119 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.82s
2026-03-28 01:39:54.287137 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.73s
2026-03-28 01:39:54.287154 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.45s
2026-03-28 01:39:54.287173 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.18s
2026-03-28 01:39:54.287191 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.10s
2026-03-28 01:39:54.287210 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.96s
2026-03-28 01:39:54.287295 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.78s
2026-03-28 01:39:54.287309 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.53s
2026-03-28 01:39:54.287330 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.48s
2026-03-28 01:39:54.587790 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.47s
2026-03-28 01:39:54.587920 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.44s
2026-03-28 01:39:54.827378 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-28 01:39:54.833043 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-28 01:39:54.838782 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-28 01:39:54.839110 | orchestrator |
2026-03-28 01:39:54.839142 | orchestrator | ## IDENTITY (API)
2026-03-28 01:39:54.839164 | orchestrator |
2026-03-28 01:39:54.839184 | orchestrator | + echo
2026-03-28 01:39:54.839205 | orchestrator | + echo '## IDENTITY (API)'
2026-03-28 01:39:54.839248 | orchestrator | + echo
2026-03-28 01:39:54.839271 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-28 01:39:54.839293 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-28 01:39:54.840394 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-03-28 01:39:54.841481 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:39:54.844356 | orchestrator | + tee -a /opt/tempest/20260328-0139.log
2026-03-28 01:39:58.794528 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:39:58.794635 | orchestrator | Did you mean one of these?
2026-03-28 01:39:58.794646 | orchestrator | help
2026-03-28 01:39:58.794653 | orchestrator | init
2026-03-28 01:39:59.266442 | orchestrator |
2026-03-28 01:39:59.266534 | orchestrator | ## IMAGE (API)
2026-03-28 01:39:59.266551 | orchestrator |
2026-03-28 01:39:59.266563 | orchestrator | + echo
2026-03-28 01:39:59.266574 | orchestrator | + echo '## IMAGE (API)'
2026-03-28 01:39:59.266586 | orchestrator | + echo
2026-03-28 01:39:59.266597 | orchestrator | + _tempest tempest.api.image.v2
2026-03-28 01:39:59.266609 | orchestrator | + local regex=tempest.api.image.v2
2026-03-28 01:39:59.266824 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-03-28 01:39:59.267303 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:39:59.269300 | orchestrator | + tee -a /opt/tempest/20260328-0139.log
2026-03-28 01:40:03.409162 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:40:03.409257 | orchestrator | Did you mean one of these?
2026-03-28 01:40:03.409266 | orchestrator | help
2026-03-28 01:40:03.409271 | orchestrator | init
2026-03-28 01:40:03.872882 | orchestrator |
2026-03-28 01:40:03.872975 | orchestrator | ## NETWORK (API)
2026-03-28 01:40:03.873014 | orchestrator |
2026-03-28 01:40:03.873026 | orchestrator | + echo
2026-03-28 01:40:03.873032 | orchestrator | + echo '## NETWORK (API)'
2026-03-28 01:40:03.873040 | orchestrator | + echo
2026-03-28 01:40:03.873046 | orchestrator | + _tempest tempest.api.network
2026-03-28 01:40:03.873052 | orchestrator | + local regex=tempest.api.network
2026-03-28 01:40:03.873400 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-03-28 01:40:03.875749 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:40:03.882184 | orchestrator | + tee -a /opt/tempest/20260328-0140.log
2026-03-28 01:40:07.987049 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:40:07.987166 | orchestrator | Did you mean one of these?
2026-03-28 01:40:07.987184 | orchestrator | help
2026-03-28 01:40:07.987209 | orchestrator | init
2026-03-28 01:40:08.452547 | orchestrator |
2026-03-28 01:40:08.452624 | orchestrator | ## VOLUME (API)
2026-03-28 01:40:08.452634 | orchestrator |
2026-03-28 01:40:08.452642 | orchestrator | + echo
2026-03-28 01:40:08.452649 | orchestrator | + echo '## VOLUME (API)'
2026-03-28 01:40:08.452656 | orchestrator | + echo
2026-03-28 01:40:08.452662 | orchestrator | + _tempest tempest.api.volume
2026-03-28 01:40:08.452668 | orchestrator | + local regex=tempest.api.volume
2026-03-28 01:40:08.453979 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-03-28 01:40:08.454141 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:40:08.459748 | orchestrator | + tee -a /opt/tempest/20260328-0140.log
2026-03-28 01:40:12.494327 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:40:12.494396 | orchestrator | Did you mean one of these?
2026-03-28 01:40:12.494403 | orchestrator | help
2026-03-28 01:40:12.494408 | orchestrator | init
2026-03-28 01:40:12.954650 | orchestrator |
2026-03-28 01:40:12.954741 | orchestrator | ## COMPUTE (API)
2026-03-28 01:40:12.954757 | orchestrator |
2026-03-28 01:40:12.954768 | orchestrator | + echo
2026-03-28 01:40:12.954777 | orchestrator | + echo '## COMPUTE (API)'
2026-03-28 01:40:12.954787 | orchestrator | + echo
2026-03-28 01:40:12.954796 | orchestrator | + _tempest tempest.api.compute
2026-03-28 01:40:12.954832 | orchestrator | + local regex=tempest.api.compute
2026-03-28 01:40:12.955005 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-03-28 01:40:12.957077 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:40:12.960318 | orchestrator | + tee -a /opt/tempest/20260328-0140.log
2026-03-28 01:40:17.020345 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:40:17.020516 | orchestrator | Did you mean one of these?
2026-03-28 01:40:17.020535 | orchestrator | help
2026-03-28 01:40:17.020542 | orchestrator | init
2026-03-28 01:40:17.477131 | orchestrator |
2026-03-28 01:40:17.477194 | orchestrator | ## DNS (API)
2026-03-28 01:40:17.477200 | orchestrator |
2026-03-28 01:40:17.477204 | orchestrator | + echo
2026-03-28 01:40:17.477209 | orchestrator | + echo '## DNS (API)'
2026-03-28 01:40:17.477241 | orchestrator | + echo
2026-03-28 01:40:17.477246 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-03-28 01:40:17.477265 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-03-28 01:40:17.478950 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-03-28 01:40:17.478963 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:40:17.481350 | orchestrator | + tee -a /opt/tempest/20260328-0140.log
2026-03-28 01:40:21.329387 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:40:21.329456 | orchestrator | Did you mean one of these?
2026-03-28 01:40:21.329463 | orchestrator | help
2026-03-28 01:40:21.329468 | orchestrator | init
2026-03-28 01:40:21.838045 | orchestrator |
2026-03-28 01:40:21.838113 | orchestrator | ## OBJECT-STORE (API)
2026-03-28 01:40:21.838120 | orchestrator |
2026-03-28 01:40:21.838125 | orchestrator | + echo
2026-03-28 01:40:21.838129 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-03-28 01:40:21.838134 | orchestrator | + echo
2026-03-28 01:40:21.838138 | orchestrator | + _tempest tempest.api.object_storage
2026-03-28 01:40:21.838143 | orchestrator | + local regex=tempest.api.object_storage
2026-03-28 01:40:21.839177 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-03-28 01:40:21.839507 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-28 01:40:21.842387 | orchestrator | + tee -a /opt/tempest/20260328-0140.log
2026-03-28 01:40:25.689995 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-28 01:40:25.690088 | orchestrator | Did you mean one of these?
2026-03-28 01:40:25.690097 | orchestrator | help
2026-03-28 01:40:25.690104 | orchestrator | init
2026-03-28 01:40:26.186970 | orchestrator | ok: Runtime: 0:02:11.291801
2026-03-28 01:40:26.203307 |
2026-03-28 01:40:26.203489 | TASK [Check prometheus alert status]
2026-03-28 01:40:26.742147 | orchestrator | skipping: Conditional result was False
2026-03-28 01:40:26.744421 |
2026-03-28 01:40:26.744773 | PLAY RECAP
2026-03-28 01:40:26.744856 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-03-28 01:40:26.744883 |
2026-03-28 01:40:27.011790 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-03-28 01:40:27.012979 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-28 01:40:27.846883 |
2026-03-28 01:40:27.847061 | PLAY [Post output play]
2026-03-28 01:40:27.870094 |
2026-03-28 01:40:27.870316 | LOOP [stage-output : Register sources]
2026-03-28 01:40:27.941579 |
2026-03-28 01:40:27.941894 | TASK [stage-output : Check sudo]
2026-03-28 01:40:28.866454 | orchestrator | sudo: a password is required
2026-03-28 01:40:28.987972 | orchestrator | ok: Runtime: 0:00:00.014468
2026-03-28 01:40:29.003153 |
2026-03-28 01:40:29.003339 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-28 01:40:29.043459 |
2026-03-28 01:40:29.043835 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-28 01:40:29.112654 | orchestrator | ok
2026-03-28 01:40:29.123486 |
2026-03-28 01:40:29.123723 | LOOP [stage-output : Ensure target folders exist]
2026-03-28 01:40:29.596627 | orchestrator | ok: "docs"
2026-03-28 01:40:29.596984 |
2026-03-28 01:40:29.850665 | orchestrator | ok: "artifacts"
2026-03-28 01:40:30.140475 | orchestrator | ok: "logs"
2026-03-28 01:40:30.163687 |
2026-03-28 01:40:30.163900 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-28 01:40:30.204147 |
2026-03-28 01:40:30.204469 | TASK [stage-output : Make all log files readable]
2026-03-28 01:40:30.549273 | orchestrator | ok
2026-03-28 01:40:30.559754 |
2026-03-28 01:40:30.559934 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-28 01:40:30.595525 | orchestrator | skipping: Conditional result was False
2026-03-28 01:40:30.615119 |
2026-03-28 01:40:30.615301 | TASK [stage-output : Discover log files for compression]
2026-03-28 01:40:30.640074 | orchestrator | skipping: Conditional result was False
2026-03-28 01:40:30.653508 |
2026-03-28 01:40:30.653802 | LOOP [stage-output : Archive everything from logs]
2026-03-28 01:40:30.704052 |
2026-03-28 01:40:30.704299 | PLAY [Post cleanup play]
2026-03-28 01:40:30.720942 |
2026-03-28 01:40:30.721098 | TASK [Set cloud fact (Zuul deployment)]
2026-03-28 01:40:30.777034 | orchestrator | ok
2026-03-28 01:40:30.788771 |
2026-03-28 01:40:30.788956 | TASK [Set cloud fact (local deployment)]
2026-03-28 01:40:30.814742 | orchestrator | skipping: Conditional result was False
2026-03-28 01:40:30.832122 |
2026-03-28 01:40:30.832281 | TASK [Clean the cloud environment]
2026-03-28 01:40:31.489855 | orchestrator | 2026-03-28 01:40:31 - clean up servers
2026-03-28 01:40:32.259185 | orchestrator | 2026-03-28 01:40:32 - testbed-manager
2026-03-28 01:40:32.340490 | orchestrator | 2026-03-28 01:40:32 - testbed-node-3
2026-03-28 01:40:32.424600 | orchestrator | 2026-03-28 01:40:32 - testbed-node-5
2026-03-28 01:40:32.528733 | orchestrator | 2026-03-28 01:40:32 - testbed-node-4
2026-03-28 01:40:32.630033 | orchestrator | 2026-03-28 01:40:32 - testbed-node-1
2026-03-28 01:40:32.721562 | orchestrator | 2026-03-28 01:40:32 - testbed-node-0
2026-03-28 01:40:32.814154 | orchestrator | 2026-03-28 01:40:32 - testbed-node-2
2026-03-28 01:40:32.903393 | orchestrator | 2026-03-28 01:40:32 - clean up keypairs
2026-03-28 01:40:32.918401 | orchestrator | 2026-03-28 01:40:32 - testbed
2026-03-28 01:40:32.943029 | orchestrator | 2026-03-28 01:40:32 - wait for servers to be gone
2026-03-28 01:40:46.351924 | orchestrator | 2026-03-28 01:40:46 - clean up ports
2026-03-28 01:40:46.534686 | orchestrator | 2026-03-28 01:40:46 - 247ef05b-ebf7-4b73-bd8d-b302c4227a84
2026-03-28 01:40:46.808252 | orchestrator | 2026-03-28 01:40:46 - 46d8b22f-ed1f-4ee4-87ae-3dd3cbe95fab
2026-03-28 01:40:47.056919 | orchestrator | 2026-03-28 01:40:47 - 6ef56f71-0506-4edb-babc-dd9b7106c4ce
2026-03-28 01:40:47.294503 | orchestrator | 2026-03-28 01:40:47 - 887afeba-cb4a-4926-b5a1-5cef64993f96
2026-03-28 01:40:47.680021 | orchestrator | 2026-03-28 01:40:47 - 8c3535d3-d852-4a39-b365-08c50b364a7b
2026-03-28 01:40:47.883346 | orchestrator | 2026-03-28 01:40:47 - d2981187-0f6a-4621-8927-8aae508b467f
2026-03-28 01:40:48.100175 | orchestrator | 2026-03-28 01:40:48 - e6391af9-60e0-4ddd-9db2-a28ae9b090cd
2026-03-28 01:40:48.307680 | orchestrator | 2026-03-28 01:40:48 - clean up volumes
2026-03-28 01:40:48.429326 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-4-node-base
2026-03-28 01:40:48.465873 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-1-node-base
2026-03-28 01:40:48.504954 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-5-node-base
2026-03-28 01:40:48.549619 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-manager-base
2026-03-28 01:40:48.590954 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-3-node-base
2026-03-28 01:40:48.635947 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-2-node-base
2026-03-28 01:40:48.679406 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-0-node-base
2026-03-28 01:40:48.722838 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-4-node-4
2026-03-28 01:40:48.764035 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-0-node-3
2026-03-28 01:40:48.811300 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-7-node-4
2026-03-28 01:40:48.855439 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-3-node-3
2026-03-28 01:40:48.900967 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-6-node-3
2026-03-28 01:40:48.944929 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-5-node-5
2026-03-28 01:40:48.987448 | orchestrator | 2026-03-28 01:40:48 - testbed-volume-2-node-5
2026-03-28 01:40:49.030098 | orchestrator | 2026-03-28 01:40:49 - testbed-volume-8-node-5
2026-03-28 01:40:49.073711 | orchestrator | 2026-03-28 01:40:49 - testbed-volume-1-node-4
2026-03-28 01:40:49.115163 | orchestrator | 2026-03-28 01:40:49 - disconnect routers
2026-03-28 01:40:49.250538 | orchestrator | 2026-03-28 01:40:49 - testbed
2026-03-28 01:40:50.729662 | orchestrator | 2026-03-28 01:40:50 - clean up subnets
2026-03-28 01:40:50.777448 | orchestrator | 2026-03-28 01:40:50 - subnet-testbed-management
2026-03-28 01:40:50.954468 | orchestrator | 2026-03-28 01:40:50 - clean up networks
2026-03-28 01:40:51.112877 | orchestrator | 2026-03-28 01:40:51 - net-testbed-management
2026-03-28 01:40:51.441743 | orchestrator | 2026-03-28 01:40:51 - clean up security groups
2026-03-28 01:40:51.484505 | orchestrator | 2026-03-28 01:40:51 - testbed-management
2026-03-28 01:40:51.593844 | orchestrator | 2026-03-28 01:40:51 - testbed-node
2026-03-28 01:40:51.702245 | orchestrator | 2026-03-28 01:40:51 - clean up floating ips
2026-03-28 01:40:51.740479 | orchestrator | 2026-03-28 01:40:51 - 81.163.193.109
2026-03-28 01:40:52.097669 | orchestrator | 2026-03-28 01:40:52 - clean up routers
2026-03-28 01:40:52.198284 | orchestrator | 2026-03-28 01:40:52 - testbed
2026-03-28 01:40:53.891099 | orchestrator | ok: Runtime: 0:00:22.385672
2026-03-28 01:40:53.895566 |
2026-03-28 01:40:53.895737 | PLAY RECAP
2026-03-28 01:40:53.895867 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-28 01:40:53.895929 |
2026-03-28 01:40:54.037868 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-28 01:40:54.040580 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-28 01:40:54.789042 | 2026-03-28 01:40:54.789212 | PLAY [Cleanup play] 2026-03-28 01:40:54.806322 | 2026-03-28 01:40:54.806475 | TASK [Set cloud fact (Zuul deployment)] 2026-03-28 01:40:54.874727 | orchestrator | ok 2026-03-28 01:40:54.885286 | 2026-03-28 01:40:54.885467 | TASK [Set cloud fact (local deployment)] 2026-03-28 01:40:54.923051 | orchestrator | skipping: Conditional result was False 2026-03-28 01:40:54.942116 | 2026-03-28 01:40:54.942297 | TASK [Clean the cloud environment] 2026-03-28 01:40:56.186274 | orchestrator | 2026-03-28 01:40:56 - clean up servers 2026-03-28 01:40:56.673897 | orchestrator | 2026-03-28 01:40:56 - clean up keypairs 2026-03-28 01:40:56.692155 | orchestrator | 2026-03-28 01:40:56 - wait for servers to be gone 2026-03-28 01:40:56.737377 | orchestrator | 2026-03-28 01:40:56 - clean up ports 2026-03-28 01:40:56.818124 | orchestrator | 2026-03-28 01:40:56 - clean up volumes 2026-03-28 01:40:56.880839 | orchestrator | 2026-03-28 01:40:56 - disconnect routers 2026-03-28 01:40:56.911752 | orchestrator | 2026-03-28 01:40:56 - clean up subnets 2026-03-28 01:40:56.931401 | orchestrator | 2026-03-28 01:40:56 - clean up networks 2026-03-28 01:40:57.052067 | orchestrator | 2026-03-28 01:40:57 - clean up security groups 2026-03-28 01:40:57.090514 | orchestrator | 2026-03-28 01:40:57 - clean up floating ips 2026-03-28 01:40:57.113794 | orchestrator | 2026-03-28 01:40:57 - clean up routers 2026-03-28 01:40:57.482425 | orchestrator | ok: Runtime: 0:00:01.364956 2026-03-28 01:40:57.485097 | 2026-03-28 01:40:57.485211 | PLAY RECAP 2026-03-28 01:40:57.485304 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-28 01:40:57.485346 | 2026-03-28 01:40:57.622338 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-28 01:40:57.625144 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-28 01:40:58.403760 | 
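Aside: both "Clean the cloud environment" runs above tear resources down in the same fixed order, deleting dependents before the resources they depend on (ports before networks, everything before the router). A minimal sketch of that ordering, taken directly from the log; the names `CLEANUP_PHASES` and `phase_order` are illustrative and not part of the OSISM tooling:

```python
# Teardown order observed in the cleanup log above. Deleting in this
# sequence avoids "resource in use" errors: ports must be gone before
# their network, the router is disconnected before subnets go, and the
# router itself is removed last.
CLEANUP_PHASES = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",
    "clean up ports",
    "clean up volumes",
    "disconnect routers",
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]


def phase_order(phase: str) -> int:
    """Position of a phase in the teardown sequence (0 = first)."""
    return CLEANUP_PHASES.index(phase)


# Sanity-check the dependency ordering encoded above.
assert phase_order("clean up ports") < phase_order("clean up networks")
assert phase_order("disconnect routers") < phase_order("clean up subnets")
assert phase_order("clean up routers") == len(CLEANUP_PHASES) - 1
```

The second cleanup run (in `cleanup.yml`) walks the same phases but finds nothing left to delete, which is why it completes in about one second versus twenty-two for the first.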
2026-03-28 01:40:58.403931 | PLAY [Base post-fetch]
2026-03-28 01:40:58.420348 |
2026-03-28 01:40:58.420510 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-28 01:40:58.486602 | orchestrator | skipping: Conditional result was False
2026-03-28 01:40:58.501860 |
2026-03-28 01:40:58.502100 | TASK [fetch-output : Set log path for single node]
2026-03-28 01:40:58.570427 | orchestrator | ok
2026-03-28 01:40:58.579291 |
2026-03-28 01:40:58.579455 | LOOP [fetch-output : Ensure local output dirs]
2026-03-28 01:40:59.068704 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/dbc4c42d8cae461abd33bd0788dfae71/work/logs"
2026-03-28 01:40:59.350585 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dbc4c42d8cae461abd33bd0788dfae71/work/artifacts"
2026-03-28 01:40:59.650921 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dbc4c42d8cae461abd33bd0788dfae71/work/docs"
2026-03-28 01:40:59.675377 |
2026-03-28 01:40:59.675604 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-28 01:41:00.698125 | orchestrator | changed: .d..t...... ./
2026-03-28 01:41:00.698483 | orchestrator | changed: All items complete
2026-03-28 01:41:00.698566 |
2026-03-28 01:41:01.511242 | orchestrator | changed: .d..t...... ./
2026-03-28 01:41:02.288155 | orchestrator | changed: .d..t...... ./
2026-03-28 01:41:02.313885 |
2026-03-28 01:41:02.314047 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-28 01:41:02.346233 | orchestrator | skipping: Conditional result was False
2026-03-28 01:41:02.352015 | orchestrator | skipping: Conditional result was False
2026-03-28 01:41:02.370138 |
2026-03-28 01:41:02.370269 | PLAY RECAP
2026-03-28 01:41:02.370344 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-28 01:41:02.370381 |
2026-03-28 01:41:02.512647 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-28 01:41:02.516790 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-28 01:41:03.314617 |
2026-03-28 01:41:03.314819 | PLAY [Base post]
2026-03-28 01:41:03.331119 |
2026-03-28 01:41:03.331309 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-28 01:41:04.395947 | orchestrator | changed
2026-03-28 01:41:04.406999 |
2026-03-28 01:41:04.407176 | PLAY RECAP
2026-03-28 01:41:04.407253 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-28 01:41:04.407326 |
2026-03-28 01:41:04.559622 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-28 01:41:04.561306 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-28 01:41:05.402155 |
2026-03-28 01:41:05.402345 | PLAY [Base post-logs]
2026-03-28 01:41:05.413869 |
2026-03-28 01:41:05.414130 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-28 01:41:05.901703 | localhost | changed
2026-03-28 01:41:05.912681 |
2026-03-28 01:41:05.912991 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-28 01:41:05.949579 | localhost | ok
2026-03-28 01:41:05.953864 |
2026-03-28 01:41:05.953992 | TASK [Set zuul-log-path fact]
2026-03-28 01:41:05.970089 | localhost | ok
2026-03-28 01:41:05.982660 |
2026-03-28 01:41:05.982821 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-28 01:41:06.021502 | localhost | ok
2026-03-28 01:41:06.028692 |
2026-03-28 01:41:06.028889 | TASK [upload-logs : Create log directories]
2026-03-28 01:41:06.656889 | localhost | changed
2026-03-28 01:41:06.661675 |
2026-03-28 01:41:06.661845 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-28 01:41:07.164280 | localhost -> localhost | ok: Runtime: 0:00:00.007835
2026-03-28 01:41:07.168625 |
2026-03-28 01:41:07.168752 | TASK [upload-logs : Upload logs to log server]
2026-03-28 01:41:07.777791 | localhost | Output suppressed because no_log was given
2026-03-28 01:41:07.782262 |
2026-03-28 01:41:07.782498 | LOOP [upload-logs : Compress console log and json output]
2026-03-28 01:41:07.856369 | localhost | skipping: Conditional result was False
2026-03-28 01:41:07.861618 | localhost | skipping: Conditional result was False
2026-03-28 01:41:07.868425 |
2026-03-28 01:41:07.868633 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-28 01:41:07.918233 | localhost | skipping: Conditional result was False
2026-03-28 01:41:07.918788 |
2026-03-28 01:41:07.923103 | localhost | skipping: Conditional result was False
2026-03-28 01:41:07.930495 |
2026-03-28 01:41:07.930736 | LOOP [upload-logs : Upload console log and json output]